Test Report: Docker_macOS 14079

798c4e8fed290cfa318a9fb994a7c6f555db39c1:2022-06-01:24216

Failed tests (21/288)

TestDownloadOnly/v1.16.0/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.10s)
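For context on this failure: the subtest is a plain file-existence probe on the cached preload tarball under MINIKUBE_HOME. Below is a minimal Go sketch of that kind of verification; the helper name and path layout are illustrative, inferred from the error message above, and are not the actual minikube test code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// verifyPreloadExists is an illustrative stand-in for the check made by
// TestDownloadOnly/<version>/preload-exists: stat the cached preload tarball
// and report an error if the file is missing.
func verifyPreloadExists(minikubeHome, k8sVersion string) error {
	tarball := filepath.Join(minikubeHome, "cache", "preloaded-tarball",
		fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion))
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("failed to verify preloaded tarball file exists: %w", err)
	}
	return nil
}

func main() {
	// Reproduces the v1.16.0 lookup that failed in this run; MINIKUBE_HOME is
	// assumed to point at the .minikube directory used by the test job.
	if err := verifyPreloadExists(os.Getenv("MINIKUBE_HOME"), "v1.16.0"); err != nil {
		fmt.Println(err)
	}
}

The stat error in the log is consistent with the v1.16.0 tarball simply not being present in the cache directory when this subtest ran.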

TestIngressAddonLegacy/StartLegacyK8sCluster (252.92s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220601032706-2342 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0601 03:27:09.498336    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:27:29.979190    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:28:10.941735    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:29:32.864440    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:30:49.580335    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:49.586781    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:49.599069    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:49.621244    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:49.662203    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:49.744410    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:49.904641    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:50.226859    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:50.869293    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:52.151648    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:54.712850    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:30:59.834357    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:31:10.075345    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220601032706-2342 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m12.892685487s)

-- stdout --
	* [ingress-addon-legacy-20220601032706-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node ingress-addon-legacy-20220601032706-2342 in cluster ingress-addon-legacy-20220601032706-2342
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0601 03:27:06.531914    4409 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:27:06.532091    4409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:27:06.532096    4409 out.go:309] Setting ErrFile to fd 2...
	I0601 03:27:06.532100    4409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:27:06.532199    4409 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:27:06.532495    4409 out.go:303] Setting JSON to false
	I0601 03:27:06.547438    4409 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1596,"bootTime":1654077630,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 03:27:06.547526    4409 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 03:27:06.569684    4409 out.go:177] * [ingress-addon-legacy-20220601032706-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 03:27:06.591533    4409 notify.go:193] Checking for updates...
	I0601 03:27:06.613274    4409 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 03:27:06.635353    4409 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 03:27:06.656614    4409 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 03:27:06.678408    4409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 03:27:06.699547    4409 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 03:27:06.721875    4409 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 03:27:06.794814    4409 docker.go:137] docker version: linux-20.10.14
	I0601 03:27:06.794981    4409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:27:06.922218    4409 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 10:27:06.866872385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:27:06.944263    4409 out.go:177] * Using the docker driver based on user configuration
	I0601 03:27:06.966120    4409 start.go:284] selected driver: docker
	I0601 03:27:06.966141    4409 start.go:806] validating driver "docker" against <nil>
	I0601 03:27:06.966168    4409 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 03:27:06.969580    4409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:27:07.095472    4409 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 10:27:07.041472425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:27:07.095596    4409 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 03:27:07.095748    4409 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 03:27:07.117696    4409 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 03:27:07.139217    4409 cni.go:95] Creating CNI manager for ""
	I0601 03:27:07.139250    4409 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:27:07.139265    4409 start_flags.go:306] config:
	{Name:ingress-addon-legacy-20220601032706-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601032706-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerI
Ps:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:27:07.161435    4409 out.go:177] * Starting control plane node ingress-addon-legacy-20220601032706-2342 in cluster ingress-addon-legacy-20220601032706-2342
	I0601 03:27:07.205295    4409 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 03:27:07.227380    4409 out.go:177] * Pulling base image ...
	I0601 03:27:07.270311    4409 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 03:27:07.270312    4409 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 03:27:07.337621    4409 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 03:27:07.337644    4409 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 03:27:07.342455    4409 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0601 03:27:07.342474    4409 cache.go:57] Caching tarball of preloaded images
	I0601 03:27:07.342791    4409 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 03:27:07.386757    4409 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0601 03:27:07.407989    4409 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:27:07.508001    4409 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0601 03:27:09.513068    4409 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:27:09.513254    4409 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:27:10.134153    4409 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0601 03:27:10.134418    4409 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/config.json ...
	I0601 03:27:10.134441    4409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/config.json: {Name:mk1401ac79bfab55145957379cab0aac6f1010c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:27:10.134744    4409 cache.go:206] Successfully downloaded all kic artifacts
	I0601 03:27:10.134775    4409 start.go:352] acquiring machines lock for ingress-addon-legacy-20220601032706-2342: {Name:mkae8d6b7c1a48a21135bd08ca20d984d2f0962a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:27:10.134903    4409 start.go:356] acquired machines lock for "ingress-addon-legacy-20220601032706-2342" in 121.039µs
	I0601 03:27:10.134923    4409 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220601032706-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-202206010
32706-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 03:27:10.134986    4409 start.go:131] createHost starting for "" (driver="docker")
	I0601 03:27:10.184981    4409 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0601 03:27:10.185378    4409 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220601032706-2342" (driver="docker")
	I0601 03:27:10.185416    4409 client.go:168] LocalClient.Create starting
	I0601 03:27:10.185560    4409 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 03:27:10.185627    4409 main.go:134] libmachine: Decoding PEM data...
	I0601 03:27:10.185650    4409 main.go:134] libmachine: Parsing certificate...
	I0601 03:27:10.185743    4409 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 03:27:10.185790    4409 main.go:134] libmachine: Decoding PEM data...
	I0601 03:27:10.185806    4409 main.go:134] libmachine: Parsing certificate...
	I0601 03:27:10.186639    4409 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601032706-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 03:27:10.252696    4409 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601032706-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 03:27:10.252803    4409 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220601032706-2342] to gather additional debugging logs...
	I0601 03:27:10.252824    4409 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601032706-2342
	W0601 03:27:10.316090    4409 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601032706-2342 returned with exit code 1
	I0601 03:27:10.316126    4409 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220601032706-2342]: docker network inspect ingress-addon-legacy-20220601032706-2342: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220601032706-2342
	I0601 03:27:10.316156    4409 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220601032706-2342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220601032706-2342
	
	** /stderr **
	I0601 03:27:10.316239    4409 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 03:27:10.378294    4409 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003ae910] misses:0}
	I0601 03:27:10.378329    4409 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 03:27:10.378347    4409 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220601032706-2342 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 03:27:10.378416    4409 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601032706-2342
	I0601 03:27:10.478842    4409 network_create.go:99] docker network ingress-addon-legacy-20220601032706-2342 192.168.49.0/24 created
	I0601 03:27:10.478897    4409 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220601032706-2342" container
	I0601 03:27:10.478987    4409 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 03:27:10.542990    4409 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220601032706-2342 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601032706-2342 --label created_by.minikube.sigs.k8s.io=true
	I0601 03:27:10.607460    4409 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220601032706-2342
	I0601 03:27:10.607684    4409 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220601032706-2342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601032706-2342 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220601032706-2342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 03:27:11.066660    4409 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220601032706-2342
	I0601 03:27:11.066699    4409 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 03:27:11.066713    4409 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 03:27:11.066838    4409 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220601032706-2342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 03:27:15.578160    4409 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220601032706-2342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (4.511240197s)
	I0601 03:27:15.578192    4409 kic.go:188] duration metric: took 4.511459 seconds to extract preloaded images to volume
	I0601 03:27:15.578273    4409 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 03:27:15.707285    4409 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220601032706-2342 --name ingress-addon-legacy-20220601032706-2342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601032706-2342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220601032706-2342 --network ingress-addon-legacy-20220601032706-2342 --ip 192.168.49.2 --volume ingress-addon-legacy-20220601032706-2342:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 03:27:16.084732    4409 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601032706-2342 --format={{.State.Running}}
	I0601 03:27:16.157196    4409 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601032706-2342 --format={{.State.Status}}
	I0601 03:27:16.232930    4409 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220601032706-2342 stat /var/lib/dpkg/alternatives/iptables
	I0601 03:27:16.371116    4409 oci.go:247] the created container "ingress-addon-legacy-20220601032706-2342" has a running status.
	I0601 03:27:16.371169    4409 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa...
	I0601 03:27:16.608137    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0601 03:27:16.608194    4409 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 03:27:16.720387    4409 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601032706-2342 --format={{.State.Status}}
	I0601 03:27:16.789419    4409 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 03:27:16.789436    4409 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220601032706-2342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 03:27:16.913557    4409 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601032706-2342 --format={{.State.Status}}
	I0601 03:27:16.982714    4409 machine.go:88] provisioning docker machine ...
	I0601 03:27:16.982774    4409 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220601032706-2342"
	I0601 03:27:16.982861    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:17.052041    4409 main.go:134] libmachine: Using SSH client type: native
	I0601 03:27:17.052316    4409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52975 <nil> <nil>}
	I0601 03:27:17.052355    4409 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220601032706-2342 && echo "ingress-addon-legacy-20220601032706-2342" | sudo tee /etc/hostname
	I0601 03:27:17.178213    4409 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220601032706-2342
	
	I0601 03:27:17.178294    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:17.247590    4409 main.go:134] libmachine: Using SSH client type: native
	I0601 03:27:17.247753    4409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52975 <nil> <nil>}
	I0601 03:27:17.247769    4409 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220601032706-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220601032706-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220601032706-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 03:27:17.365151    4409 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 03:27:17.365176    4409 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 03:27:17.365202    4409 ubuntu.go:177] setting up certificates
	I0601 03:27:17.365212    4409 provision.go:83] configureAuth start
	I0601 03:27:17.365279    4409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:17.434289    4409 provision.go:138] copyHostCerts
	I0601 03:27:17.434325    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 03:27:17.434375    4409 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 03:27:17.434833    4409 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 03:27:17.434946    4409 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 03:27:17.435144    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 03:27:17.435192    4409 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 03:27:17.435197    4409 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 03:27:17.435259    4409 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 03:27:17.435380    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 03:27:17.435413    4409 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 03:27:17.435418    4409 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 03:27:17.435473    4409 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 03:27:17.435586    4409 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220601032706-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220601032706-2342]
	I0601 03:27:17.595251    4409 provision.go:172] copyRemoteCerts
	I0601 03:27:17.595306    4409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 03:27:17.595376    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:17.663923    4409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52975 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa Username:docker}
	I0601 03:27:17.750864    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0601 03:27:17.750929    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 03:27:17.767161    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0601 03:27:17.767230    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0601 03:27:17.784184    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0601 03:27:17.784253    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 03:27:17.800922    4409 provision.go:86] duration metric: configureAuth took 435.692698ms
	I0601 03:27:17.800935    4409 ubuntu.go:193] setting minikube options for container-runtime
	I0601 03:27:17.801156    4409 config.go:178] Loaded profile config "ingress-addon-legacy-20220601032706-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0601 03:27:17.801267    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:17.870409    4409 main.go:134] libmachine: Using SSH client type: native
	I0601 03:27:17.870583    4409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52975 <nil> <nil>}
	I0601 03:27:17.870610    4409 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 03:27:17.989696    4409 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 03:27:17.989711    4409 ubuntu.go:71] root file system type: overlay
	I0601 03:27:17.989866    4409 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 03:27:17.989938    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:18.058299    4409 main.go:134] libmachine: Using SSH client type: native
	I0601 03:27:18.058463    4409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52975 <nil> <nil>}
	I0601 03:27:18.058520    4409 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 03:27:18.187747    4409 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 03:27:18.187825    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:18.256549    4409 main.go:134] libmachine: Using SSH client type: native
	I0601 03:27:18.256703    4409 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52975 <nil> <nil>}
	I0601 03:27:18.256716    4409 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 03:27:18.842422    4409 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:27:18.184801969 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0601 03:27:18.842446    4409 machine.go:91] provisioned docker machine in 1.859691833s
	I0601 03:27:18.842452    4409 client.go:171] LocalClient.Create took 8.656992893s
	I0601 03:27:18.842467    4409 start.go:173] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220601032706-2342" took 8.657053052s
	I0601 03:27:18.842474    4409 start.go:306] post-start starting for "ingress-addon-legacy-20220601032706-2342" (driver="docker")
	I0601 03:27:18.842478    4409 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 03:27:18.842540    4409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 03:27:18.842587    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:18.912682    4409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52975 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa Username:docker}
	I0601 03:27:19.000005    4409 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 03:27:19.003494    4409 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 03:27:19.003513    4409 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 03:27:19.003522    4409 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 03:27:19.003528    4409 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 03:27:19.003537    4409 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 03:27:19.003656    4409 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 03:27:19.003783    4409 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 03:27:19.003789    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> /etc/ssl/certs/23422.pem
	I0601 03:27:19.003961    4409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 03:27:19.010928    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 03:27:19.027537    4409 start.go:309] post-start completed in 185.053517ms
	I0601 03:27:19.028123    4409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:19.097087    4409 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/config.json ...
	I0601 03:27:19.097507    4409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 03:27:19.097565    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:19.166795    4409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52975 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa Username:docker}
	I0601 03:27:19.250077    4409 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 03:27:19.254372    4409 start.go:134] duration metric: createHost completed in 9.119340464s
	I0601 03:27:19.254388    4409 start.go:81] releasing machines lock for "ingress-addon-legacy-20220601032706-2342", held for 9.119438115s
	I0601 03:27:19.254457    4409 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:19.323587    4409 ssh_runner.go:195] Run: systemctl --version
	I0601 03:27:19.323590    4409 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 03:27:19.323674    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:19.323671    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:19.397944    4409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52975 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa Username:docker}
	I0601 03:27:19.400120    4409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52975 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa Username:docker}
	I0601 03:27:19.614356    4409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 03:27:19.624039    4409 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 03:27:19.633642    4409 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 03:27:19.633696    4409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 03:27:19.643005    4409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 03:27:19.655879    4409 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 03:27:19.722788    4409 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 03:27:19.784303    4409 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 03:27:19.794651    4409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 03:27:19.859037    4409 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 03:27:19.869262    4409 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 03:27:19.903087    4409 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 03:27:19.984945    4409 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	I0601 03:27:19.985106    4409 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220601032706-2342 dig +short host.docker.internal
	I0601 03:27:20.113968    4409 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 03:27:20.114084    4409 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 03:27:20.118247    4409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
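The /etc/hosts rewrite above uses a filter-then-copy pattern: any stale host.minikube.internal entry is dropped, the fresh mapping is appended, and the result is staged in a temp file before a single sudo cp replaces /etc/hosts. The same idiom with a hypothetical hostname and address (not taken from this run) looks like:

  { grep -v $'\texample.internal$' /etc/hosts; printf '10.0.0.5\texample.internal\n'; } > /tmp/hosts.$$
  sudo cp /tmp/hosts.$$ /etc/hosts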
	I0601 03:27:20.128108    4409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:27:20.201331    4409 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 03:27:20.201408    4409 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 03:27:20.230852    4409 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0601 03:27:20.230868    4409 docker.go:541] Images already preloaded, skipping extraction
	I0601 03:27:20.230945    4409 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 03:27:20.398830    4409 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0601 03:27:20.398858    4409 cache_images.go:84] Images are preloaded, skipping loading
	I0601 03:27:20.398922    4409 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 03:27:20.472924    4409 cni.go:95] Creating CNI manager for ""
	I0601 03:27:20.472937    4409 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:27:20.472951    4409 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0601 03:27:20.472965    4409 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220601032706-2342 NodeName:ingress-addon-legacy-20220601032706-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 03:27:20.473082    4409 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220601032706-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
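The KubeletConfiguration above pins cgroupDriver: systemd; the kubelet will only start cleanly if this matches the cgroup driver Docker reports, which is why minikube queries it with docker info --format {{.CgroupDriver}} a few lines further down. A manual cross-check on the node (illustrative, assuming shell access inside the container) is:

  docker info --format '{{.CgroupDriver}}'          # should print: systemd
  grep cgroupDriver /var/lib/kubelet/config.yaml    # should print: cgroupDriver: systemd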
	
	I0601 03:27:20.473158    4409 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220601032706-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601032706-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
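The kubelet unit override above lands as a drop-in (the scp a few lines below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf), so the effective ExecStart can be confirmed on the node after provisioning, for example:

  systemctl cat kubelet                # shows the base unit plus the 10-kubeadm.conf drop-in
  systemctl status kubelet --no-pager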
	I0601 03:27:20.473218    4409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0601 03:27:20.481324    4409 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 03:27:20.481386    4409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 03:27:20.488549    4409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0601 03:27:20.501364    4409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0601 03:27:20.514624    4409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
	I0601 03:27:20.527228    4409 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 03:27:20.531270    4409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 03:27:20.540675    4409 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342 for IP: 192.168.49.2
	I0601 03:27:20.540805    4409 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 03:27:20.540859    4409 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 03:27:20.540903    4409 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/client.key
	I0601 03:27:20.540914    4409 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/client.crt with IP's: []
	I0601 03:27:20.696087    4409 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/client.crt ...
	I0601 03:27:20.696099    4409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/client.crt: {Name:mk76b25ad132ef6c0d0bb87b0a480d79bb01de96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:27:20.696406    4409 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/client.key ...
	I0601 03:27:20.696414    4409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/client.key: {Name:mkd32897ac4bc044b6c41ab589137e796109a165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:27:20.696620    4409 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.key.dd3b5fb2
	I0601 03:27:20.696639    4409 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 03:27:20.770095    4409 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.crt.dd3b5fb2 ...
	I0601 03:27:20.770105    4409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.crt.dd3b5fb2: {Name:mk7749f7debad3ce0cd8c7e9c393454056ae1b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:27:20.770313    4409 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.key.dd3b5fb2 ...
	I0601 03:27:20.770321    4409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.key.dd3b5fb2: {Name:mk6579e808a6030a636f1c5101df65a0a3c54390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:27:20.770510    4409 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.crt
	I0601 03:27:20.770686    4409 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.key
	I0601 03:27:20.770843    4409 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.key
	I0601 03:27:20.770860    4409 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.crt with IP's: []
	I0601 03:27:20.871280    4409 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.crt ...
	I0601 03:27:20.871291    4409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.crt: {Name:mk0a7cbb2c69dec0056e5661d4b2f525129bd060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:27:20.871533    4409 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.key ...
	I0601 03:27:20.871553    4409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.key: {Name:mk09375135a4c3d7de805bac5c6bb5fff3e60c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:27:20.871741    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0601 03:27:20.871769    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0601 03:27:20.871793    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0601 03:27:20.871810    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0601 03:27:20.871829    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0601 03:27:20.871849    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0601 03:27:20.871864    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0601 03:27:20.871879    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0601 03:27:20.871979    4409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 03:27:20.872024    4409 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 03:27:20.872034    4409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 03:27:20.872071    4409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 03:27:20.872102    4409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 03:27:20.872131    4409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 03:27:20.872198    4409 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 03:27:20.872230    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem -> /usr/share/ca-certificates/2342.pem
	I0601 03:27:20.872253    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> /usr/share/ca-certificates/23422.pem
	I0601 03:27:20.872270    4409 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:27:20.872725    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 03:27:20.891478    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 03:27:20.908494    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 03:27:20.925188    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601032706-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 03:27:20.942106    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 03:27:20.958885    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 03:27:20.976251    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 03:27:20.993127    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 03:27:21.009974    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 03:27:21.027349    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 03:27:21.044337    4409 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 03:27:21.062270    4409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 03:27:21.074853    4409 ssh_runner.go:195] Run: openssl version
	I0601 03:27:21.080120    4409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 03:27:21.088184    4409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 03:27:21.092092    4409 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 03:27:21.092135    4409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 03:27:21.097406    4409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 03:27:21.104844    4409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 03:27:21.112802    4409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 03:27:21.116787    4409 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 03:27:21.116829    4409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 03:27:21.122194    4409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 03:27:21.129883    4409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 03:27:21.137335    4409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:27:21.141376    4409 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:27:21.141414    4409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:27:21.146703    4409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
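The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's CA lookup convention: the link name is the certificate's subject-name hash plus a .0 suffix, which is exactly what the openssl x509 -hash calls compute. The mapping can be reproduced by hand for any of these certificates, for example:

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  ls -l "/etc/ssl/certs/${hash}.0"     # should resolve to the minikubeCA certificate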
I0601 03:27:21.154769    4409 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220601032706-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601032706-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:27:21.154936    4409 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 03:27:21.183027    4409 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 03:27:21.192102    4409 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 03:27:21.199311    4409 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 03:27:21.199360    4409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 03:27:21.206826    4409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 03:27:21.206859    4409 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 03:27:21.895097    4409 out.go:204]   - Generating certificates and keys ...
	I0601 03:27:24.914620    4409 out.go:204]   - Booting up control plane ...
	W0601 03:29:19.835163    4409 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220601032706-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220601032706-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 10:27:21.253759     832 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:27:24.904139     832 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:27:24.905028     832 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220601032706-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220601032706-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 10:27:21.253759     832 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:27:24.904139     832 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:27:24.905028     832 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 03:29:19.835200    4409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 03:29:20.258083    4409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 03:29:20.267425    4409 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 03:29:20.267472    4409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 03:29:20.274481    4409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 03:29:20.274506    4409 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 03:29:20.982667    4409 out.go:204]   - Generating certificates and keys ...
	I0601 03:29:21.794673    4409 out.go:204]   - Booting up control plane ...
	I0601 03:31:16.736881    4409 kubeadm.go:397] StartCluster complete in 3m55.581081556s
	I0601 03:31:16.736966    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 03:31:16.766090    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.766102    4409 logs.go:276] No container was found matching "kube-apiserver"
	I0601 03:31:16.766155    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 03:31:16.796766    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.796778    4409 logs.go:276] No container was found matching "etcd"
	I0601 03:31:16.796834    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 03:31:16.828290    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.828301    4409 logs.go:276] No container was found matching "coredns"
	I0601 03:31:16.828367    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 03:31:16.857490    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.857502    4409 logs.go:276] No container was found matching "kube-scheduler"
	I0601 03:31:16.857557    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 03:31:16.887624    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.887637    4409 logs.go:276] No container was found matching "kube-proxy"
	I0601 03:31:16.887697    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 03:31:16.916522    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.916536    4409 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 03:31:16.916614    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 03:31:16.945154    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.945168    4409 logs.go:276] No container was found matching "storage-provisioner"
	I0601 03:31:16.945224    4409 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 03:31:16.972786    4409 logs.go:274] 0 containers: []
	W0601 03:31:16.972798    4409 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 03:31:16.972811    4409 logs.go:123] Gathering logs for kubelet ...
	I0601 03:31:16.972819    4409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 03:31:17.014363    4409 logs.go:123] Gathering logs for dmesg ...
	I0601 03:31:17.014376    4409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 03:31:17.026926    4409 logs.go:123] Gathering logs for describe nodes ...
	I0601 03:31:17.026937    4409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 03:31:17.077399    4409 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 03:31:17.077410    4409 logs.go:123] Gathering logs for Docker ...
	I0601 03:31:17.077416    4409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 03:31:17.090712    4409 logs.go:123] Gathering logs for container status ...
	I0601 03:31:17.090727    4409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 03:31:19.142465    4409 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051717081s)
	W0601 03:31:19.142601    4409 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 10:29:20.319799    3332 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:29:21.802291    3332 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:29:21.803677    3332 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 03:31:19.142619    4409 out.go:239] * 
	W0601 03:31:19.142743    4409 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 10:29:20.319799    3332 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:29:21.802291    3332 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:29:21.803677    3332 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 03:31:19.142763    4409 out.go:239] * 
	W0601 03:31:19.143320    4409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 03:31:19.206220    4409 out.go:177] 
	W0601 03:31:19.269223    4409 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 10:29:20.319799    3332 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:29:21.802291    3332 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:29:21.803677    3332 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 03:31:19.269332    4409 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 03:31:19.269375    4409 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 03:31:19.290954    4409 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220601032706-2342 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (252.92s)
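The failure above is a K8S_KUBELET_NOT_RUNNING exit: kubeadm timed out because the kubelet inside the node container never answered on 10248. A minimal triage sketch, assuming the docker driver and the profile name shown in the log; the --extra-config override is only the workaround the log itself suggests, not a confirmed fix:

	# inspect the kubelet inside the node container for this profile
	out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 ssh "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# retry the start with the cgroup-driver override suggested in the log
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-20220601032706-2342
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220601032706-2342 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd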

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 addons enable ingress --alsologtostderr -v=5
E0601 03:31:30.557832    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:31:49.014252    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:32:11.518393    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:32:16.705912    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.073007099s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:31:19.454973    4567 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:31:19.455454    4567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:31:19.455460    4567 out.go:309] Setting ErrFile to fd 2...
	I0601 03:31:19.455464    4567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:31:19.455562    4567 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:31:19.456006    4567 config.go:178] Loaded profile config "ingress-addon-legacy-20220601032706-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0601 03:31:19.456019    4567 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220601032706-2342"
	I0601 03:31:19.456025    4567 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220601032706-2342"
	I0601 03:31:19.456273    4567 host.go:66] Checking if "ingress-addon-legacy-20220601032706-2342" exists ...
	I0601 03:31:19.456733    4567 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601032706-2342 --format={{.State.Status}}
	I0601 03:31:19.543828    4567 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0601 03:31:19.565877    4567 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0601 03:31:19.587692    4567 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0601 03:31:19.609510    4567 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0601 03:31:19.630570    4567 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0601 03:31:19.630602    4567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0601 03:31:19.630726    4567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:31:19.697898    4567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52975 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa Username:docker}
	I0601 03:31:19.790032    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:19.843222    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:19.843247    4567 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:20.121196    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:20.171510    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:20.171527    4567 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:20.714072    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:20.764947    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:20.764963    4567 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:21.422372    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:21.474141    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:21.474159    4567 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:22.265748    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:22.320826    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:22.320840    4567 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:23.491994    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:23.545483    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:23.545497    4567 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:25.798873    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:25.849032    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:25.849049    4567 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:27.462160    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:27.513499    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:27.513516    4567 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:30.318042    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:30.369211    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:30.369228    4567 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:34.196491    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:34.247083    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:34.247096    4567 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:41.944984    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:41.998259    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:41.998273    4567 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:56.634224    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:31:56.684287    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:31:56.684300    4567 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:25.093370    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:32:25.143505    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:25.143519    4567 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:48.313012    4567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 03:32:48.363932    4567 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:48.363965    4567 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20220601032706-2342"
	I0601 03:32:48.385764    4567 out.go:177] * Verifying ingress addon...
	I0601 03:32:48.408837    4567 out.go:177] 
	W0601 03:32:48.430845    4567 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220601032706-2342" does not exist: client config: context "ingress-addon-legacy-20220601032706-2342" does not exist]
	W0601 03:32:48.430877    4567 out.go:239] * 
	W0601 03:32:48.433852    4567 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 03:32:48.455543    4567 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
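Every kubectl apply retry above failed with connection refused on localhost:8443, and the final error notes the kubeconfig context was never created, so the addon enable had no running apiserver to talk to. A short sketch of the checks one might run before retrying the addon, assuming the same profile name (standard minikube/kubectl usage, not commands taken from this log):

	# confirm the control plane for this profile is actually up
	out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 status
	kubectl --context ingress-addon-legacy-20220601032706-2342 get nodes

	# only then retry enabling the addon
	out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 addons enable ingress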
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601032706-2342
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220601032706-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417",
	        "Created": "2022-06-01T10:27:15.79266552Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:27:16.093998409Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/hosts",
	        "LogPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417-json.log",
	        "Name": "/ingress-addon-legacy-20220601032706-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220601032706-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220601032706-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/docker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da065f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220601032706-2342",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220601032706-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220601032706-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601032706-2342",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601032706-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7869177f0d9aa5a62c1db2d479336290ac2898138cd28881fb444f6cfb0ae68",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52975"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52976"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d7869177f0d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220601032706-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e1ff2a0c93d2",
	                        "ingress-addon-legacy-20220601032706-2342"
	                    ],
	                    "NetworkID": "796b13810dcecfa43908c2ede00b3271b76971653088c033da8662d0cc57a18b",
	                    "EndpointID": "9a8f1b69820286c6c15b4bf642796f6eee39687f1440615f028e569f22eb3e3d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601032706-2342 -n ingress-addon-legacy-20220601032706-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601032706-2342 -n ingress-addon-legacy-20220601032706-2342: exit status 6 (438.350106ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:32:48.976600    4592 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220601032706-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601032706-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 addons enable ingress-dns --alsologtostderr -v=5
E0601 03:33:33.441172    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601032706-2342 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.008884893s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:32:49.036420    4602 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:32:49.036716    4602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:32:49.036722    4602 out.go:309] Setting ErrFile to fd 2...
	I0601 03:32:49.036730    4602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:32:49.036844    4602 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:32:49.037272    4602 config.go:178] Loaded profile config "ingress-addon-legacy-20220601032706-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0601 03:32:49.037286    4602 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220601032706-2342"
	I0601 03:32:49.037292    4602 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220601032706-2342"
	I0601 03:32:49.037542    4602 host.go:66] Checking if "ingress-addon-legacy-20220601032706-2342" exists ...
	I0601 03:32:49.038028    4602 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601032706-2342 --format={{.State.Status}}
	I0601 03:32:49.126570    4602 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0601 03:32:49.148530    4602 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0601 03:32:49.170220    4602 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0601 03:32:49.170256    4602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0601 03:32:49.170460    4602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601032706-2342
	I0601 03:32:49.238178    4602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52975 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601032706-2342/id_rsa Username:docker}
	I0601 03:32:49.330129    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:49.377400    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:49.377421    4602 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:49.653805    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:49.704248    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:49.704264    4602 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:50.244753    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:50.298705    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:50.298722    4602 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:50.956049    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:51.010081    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:51.010096    4602 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:51.802724    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:51.853437    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:51.853452    4602 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:53.026022    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:53.076978    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:53.076992    4602 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:55.332433    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:55.384284    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:55.384298    4602 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:56.996171    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:57.049705    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:57.049721    4602 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:59.856279    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:32:59.908386    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:32:59.908413    4602 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:03.735658    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:33:03.786717    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:03.786732    4602 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:11.484565    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:33:11.536108    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:11.536124    4602 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:26.172429    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:33:26.223877    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:26.223896    4602 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:54.633012    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:33:54.682279    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:33:54.682293    4602 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:34:17.852923    4602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 03:34:17.904078    4602 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 03:34:17.926272    4602 out.go:177] 
	W0601 03:34:17.948044    4602 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0601 03:34:17.948070    4602 out.go:239] * 
	* 
	W0601 03:34:17.951067    4602 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 03:34:17.971982    4602 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601032706-2342
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220601032706-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417",
	        "Created": "2022-06-01T10:27:15.79266552Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:27:16.093998409Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/hosts",
	        "LogPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417-json.log",
	        "Name": "/ingress-addon-legacy-20220601032706-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220601032706-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220601032706-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/docker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da065f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220601032706-2342",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220601032706-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220601032706-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601032706-2342",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601032706-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7869177f0d9aa5a62c1db2d479336290ac2898138cd28881fb444f6cfb0ae68",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52975"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52976"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d7869177f0d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220601032706-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e1ff2a0c93d2",
	                        "ingress-addon-legacy-20220601032706-2342"
	                    ],
	                    "NetworkID": "796b13810dcecfa43908c2ede00b3271b76971653088c033da8662d0cc57a18b",
	                    "EndpointID": "9a8f1b69820286c6c15b4bf642796f6eee39687f1440615f028e569f22eb3e3d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601032706-2342 -n ingress-addon-legacy-20220601032706-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601032706-2342 -n ingress-addon-legacy-20220601032706-2342: exit status 6 (430.635119ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:34:18.486543    4627 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220601032706-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601032706-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:156: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601032706-2342
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220601032706-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417",
	        "Created": "2022-06-01T10:27:15.79266552Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:27:16.093998409Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/hosts",
	        "LogPath": "/var/lib/docker/containers/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417/e1ff2a0c93d201cf00233813b6e5455148db4281282b5db298627e9ad46d3417-json.log",
	        "Name": "/ingress-addon-legacy-20220601032706-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220601032706-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220601032706-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/docker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da065f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03801ef5de03a15b1444aacb64a8dfd2b6ea3ed85fde3b4a7513d37ce7623c08/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220601032706-2342",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220601032706-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220601032706-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601032706-2342",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601032706-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7869177f0d9aa5a62c1db2d479336290ac2898138cd28881fb444f6cfb0ae68",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52975"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52976"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52974"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d7869177f0d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220601032706-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e1ff2a0c93d2",
	                        "ingress-addon-legacy-20220601032706-2342"
	                    ],
	                    "NetworkID": "796b13810dcecfa43908c2ede00b3271b76971653088c033da8662d0cc57a18b",
	                    "EndpointID": "9a8f1b69820286c6c15b4bf642796f6eee39687f1440615f028e569f22eb3e3d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601032706-2342 -n ingress-addon-legacy-20220601032706-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601032706-2342 -n ingress-addon-legacy-20220601032706-2342: exit status 6 (430.380675ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:34:18.989136    4641 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220601032706-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601032706-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)
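The status failure above stems from the kubeconfig missing a cluster entry for the profile (the status.go:413 "does not appear in ... kubeconfig" error); the captured stdout suggests running `minikube update-context` to repair the context. A minimal sketch of that kind of lookup — hypothetical, written against k8s.io/client-go rather than minikube's own code, and assuming client-go is on the module path — looks like this:

	// kubeconfig_check.go — hypothetical standalone check, not minikube's implementation.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		if len(os.Args) != 3 {
			fmt.Fprintln(os.Stderr, "usage: kubeconfig_check <kubeconfig> <profile>")
			os.Exit(2)
		}
		kubeconfig, profile := os.Args[1], os.Args[2]

		// Load the kubeconfig file (the same file the status command inspects).
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintf(os.Stderr, "load %s: %v\n", kubeconfig, err)
			os.Exit(1)
		}

		// A minikube profile normally registers a cluster entry under its own name;
		// when that entry is absent, a check like this reproduces the
		// "does not appear in <kubeconfig>" condition seen in the stderr above.
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			fmt.Printf("%q does not appear in %s\n", profile, kubeconfig)
			os.Exit(1)
		}
		fmt.Printf("%q endpoint: %s\n", profile, cluster.Server)
	}

Run against the kubeconfig path from the error and the profile name, a sketch like this would print the missing-entry message whenever the profile has no cluster entry, which is the state the warning's `minikube update-context` advice is meant to fix.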

                                                
                                    
TestPreload (264.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220601034553-2342 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0601 03:46:49.042563    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:47:12.669795    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220601034553-2342 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m21.02046326s)

                                                
                                                
-- stdout --
	* [test-preload-20220601034553-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node test-preload-20220601034553-2342 in cluster test-preload-20220601034553-2342
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:45:53.720927    7741 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:45:53.721146    7741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:45:53.721152    7741 out.go:309] Setting ErrFile to fd 2...
	I0601 03:45:53.721156    7741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:45:53.721263    7741 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:45:53.721577    7741 out.go:303] Setting JSON to false
	I0601 03:45:53.736488    7741 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2723,"bootTime":1654077630,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 03:45:53.736577    7741 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 03:45:53.758968    7741 out.go:177] * [test-preload-20220601034553-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 03:45:53.802469    7741 notify.go:193] Checking for updates...
	I0601 03:45:53.824462    7741 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 03:45:53.846593    7741 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 03:45:53.868468    7741 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 03:45:53.890735    7741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 03:45:53.912580    7741 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 03:45:53.934682    7741 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 03:45:54.007356    7741 docker.go:137] docker version: linux-20.10.14
	I0601 03:45:54.007490    7741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:45:54.135576    7741 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 10:45:54.070526018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:45:54.179462    7741 out.go:177] * Using the docker driver based on user configuration
	I0601 03:45:54.201072    7741 start.go:284] selected driver: docker
	I0601 03:45:54.201097    7741 start.go:806] validating driver "docker" against <nil>
	I0601 03:45:54.201118    7741 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 03:45:54.203618    7741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:45:54.330098    7741 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 10:45:54.266161311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:45:54.330235    7741 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 03:45:54.330398    7741 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 03:45:54.353324    7741 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 03:45:54.374926    7741 cni.go:95] Creating CNI manager for ""
	I0601 03:45:54.374959    7741 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:45:54.374970    7741 start_flags.go:306] config:
	{Name:test-preload-20220601034553-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601034553-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:45:54.397147    7741 out.go:177] * Starting control plane node test-preload-20220601034553-2342 in cluster test-preload-20220601034553-2342
	I0601 03:45:54.439939    7741 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 03:45:54.461057    7741 out.go:177] * Pulling base image ...
	I0601 03:45:54.502997    7741 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0601 03:45:54.503071    7741 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 03:45:54.503369    7741 cache.go:107] acquiring lock: {Name:mk6cdcb3277425415932624173a7b7ca3460ec43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.503373    7741 cache.go:107] acquiring lock: {Name:mk7c0941a6ed1ce093a3b04ab220d9c7a1c273be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.504724    7741 cache.go:107] acquiring lock: {Name:mk96836ce61e787a868fb2b30b95e488f627a436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.505314    7741 cache.go:107] acquiring lock: {Name:mk20c68605624e99b9b5e30c51dd514c0ab06314 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.505425    7741 cache.go:107] acquiring lock: {Name:mk17e5d41685a9c9bfe40771fa97f28d234f06eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.505598    7741 cache.go:107] acquiring lock: {Name:mke6416fbc00b67e9a5f50736efd1d35748f9c49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.505670    7741 cache.go:107] acquiring lock: {Name:mk9f4c55757a82f8ed223989ac21c33d9935d242 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.506604    7741 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/config.json ...
	I0601 03:45:54.506680    7741 cache.go:107] acquiring lock: {Name:mk57b38976ca1498f8b1d64bc321ac77b155e979 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.506708    7741 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0601 03:45:54.506541    7741 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 03:45:54.506698    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/config.json: {Name:mk9b585a5dca6c2f7c822593805dd28807265a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:45:54.506751    7741 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0601 03:45:54.506670    7741 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 03:45:54.506534    7741 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0601 03:45:54.506672    7741 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0601 03:45:54.506782    7741 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.431857ms
	I0601 03:45:54.506663    7741 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 03:45:54.506814    7741 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0601 03:45:54.506999    7741 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0601 03:45:54.512824    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:45:54.514045    7741 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0601 03:45:54.514796    7741 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0601 03:45:54.515265    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:45:54.515273    7741 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0601 03:45:54.515332    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:45:54.515814    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:45:54.578102    7741 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 03:45:54.578126    7741 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 03:45:54.578140    7741 cache.go:206] Successfully downloaded all kic artifacts
	I0601 03:45:54.578183    7741 start.go:352] acquiring machines lock for test-preload-20220601034553-2342: {Name:mk73ee398b706c588115374e56d4a2881e52fefc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:45:54.578318    7741 start.go:356] acquired machines lock for "test-preload-20220601034553-2342" in 123.668µs
	I0601 03:45:54.578341    7741 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220601034553-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601034553-2342 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 03:45:54.578446    7741 start.go:131] createHost starting for "" (driver="docker")
	I0601 03:45:54.621003    7741 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 03:45:54.621207    7741 start.go:165] libmachine.API.Create for "test-preload-20220601034553-2342" (driver="docker")
	I0601 03:45:54.621231    7741 client.go:168] LocalClient.Create starting
	I0601 03:45:54.621288    7741 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 03:45:54.621319    7741 main.go:134] libmachine: Decoding PEM data...
	I0601 03:45:54.621335    7741 main.go:134] libmachine: Parsing certificate...
	I0601 03:45:54.621427    7741 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 03:45:54.621452    7741 main.go:134] libmachine: Decoding PEM data...
	I0601 03:45:54.621464    7741 main.go:134] libmachine: Parsing certificate...
	I0601 03:45:54.621891    7741 cli_runner.go:164] Run: docker network inspect test-preload-20220601034553-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 03:45:54.689932    7741 cli_runner.go:211] docker network inspect test-preload-20220601034553-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 03:45:54.689995    7741 network_create.go:272] running [docker network inspect test-preload-20220601034553-2342] to gather additional debugging logs...
	I0601 03:45:54.690010    7741 cli_runner.go:164] Run: docker network inspect test-preload-20220601034553-2342
	W0601 03:45:54.754044    7741 cli_runner.go:211] docker network inspect test-preload-20220601034553-2342 returned with exit code 1
	I0601 03:45:54.754063    7741 network_create.go:275] error running [docker network inspect test-preload-20220601034553-2342]: docker network inspect test-preload-20220601034553-2342: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220601034553-2342
	I0601 03:45:54.754076    7741 network_create.go:277] output of [docker network inspect test-preload-20220601034553-2342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220601034553-2342
	
	** /stderr **
	I0601 03:45:54.754138    7741 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 03:45:54.818050    7741 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007aa008] misses:0}
	I0601 03:45:54.818090    7741 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 03:45:54.818104    7741 network_create.go:115] attempt to create docker network test-preload-20220601034553-2342 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 03:45:54.818160    7741 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601034553-2342
	I0601 03:45:54.912772    7741 network_create.go:99] docker network test-preload-20220601034553-2342 192.168.49.0/24 created
	I0601 03:45:54.912796    7741 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20220601034553-2342" container
	I0601 03:45:54.912867    7741 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 03:45:54.976271    7741 cli_runner.go:164] Run: docker volume create test-preload-20220601034553-2342 --label name.minikube.sigs.k8s.io=test-preload-20220601034553-2342 --label created_by.minikube.sigs.k8s.io=true
	I0601 03:45:54.987942    7741 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0601 03:45:54.987997    7741 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0601 03:45:54.989939    7741 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0601 03:45:55.014419    7741 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0601 03:45:55.031217    7741 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0601 03:45:55.040120    7741 oci.go:103] Successfully created a docker volume test-preload-20220601034553-2342
	I0601 03:45:55.040188    7741 cli_runner.go:164] Run: docker run --rm --name test-preload-20220601034553-2342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220601034553-2342 --entrypoint /usr/bin/test -v test-preload-20220601034553-2342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 03:45:55.060664    7741 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0601 03:45:55.083102    7741 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0601 03:45:55.099298    7741 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0601 03:45:55.099320    7741 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 594.669442ms
	I0601 03:45:55.099330    7741 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0601 03:45:55.412768    7741 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0601 03:45:55.412785    7741 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 907.464507ms
	I0601 03:45:55.412794    7741 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0601 03:45:55.542231    7741 oci.go:107] Successfully prepared a docker volume test-preload-20220601034553-2342
	I0601 03:45:55.542277    7741 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0601 03:45:55.542339    7741 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 03:45:55.677668    7741 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220601034553-2342 --name test-preload-20220601034553-2342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220601034553-2342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220601034553-2342 --network test-preload-20220601034553-2342 --ip 192.168.49.2 --volume test-preload-20220601034553-2342:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 03:45:55.752363    7741 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0601 03:45:55.752392    7741 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 1.249046896s
	I0601 03:45:55.752418    7741 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0601 03:45:55.858552    7741 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0601 03:45:55.858591    7741 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 1.355122207s
	I0601 03:45:55.858609    7741 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0601 03:45:55.871763    7741 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0601 03:45:55.871783    7741 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 1.36833303s
	I0601 03:45:55.871791    7741 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0601 03:45:56.107196    7741 cli_runner.go:164] Run: docker container inspect test-preload-20220601034553-2342 --format={{.State.Running}}
	I0601 03:45:56.132757    7741 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0601 03:45:56.132780    7741 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 1.627415408s
	I0601 03:45:56.132813    7741 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0601 03:45:56.181394    7741 cli_runner.go:164] Run: docker container inspect test-preload-20220601034553-2342 --format={{.State.Status}}
	I0601 03:45:56.221783    7741 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0601 03:45:56.221808    7741 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 1.716309881s
	I0601 03:45:56.221821    7741 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0601 03:45:56.221838    7741 cache.go:87] Successfully saved all images to host disk.
	I0601 03:45:56.264263    7741 cli_runner.go:164] Run: docker exec test-preload-20220601034553-2342 stat /var/lib/dpkg/alternatives/iptables
	I0601 03:45:56.385193    7741 oci.go:247] the created container "test-preload-20220601034553-2342" has a running status.
	I0601 03:45:56.385223    7741 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601034553-2342/id_rsa...
	I0601 03:45:56.603605    7741 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601034553-2342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 03:45:56.773483    7741 cli_runner.go:164] Run: docker container inspect test-preload-20220601034553-2342 --format={{.State.Status}}
	I0601 03:45:56.841379    7741 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 03:45:56.841398    7741 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220601034553-2342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 03:45:56.965103    7741 cli_runner.go:164] Run: docker container inspect test-preload-20220601034553-2342 --format={{.State.Status}}
	I0601 03:45:57.033287    7741 machine.go:88] provisioning docker machine ...
	I0601 03:45:57.033333    7741 ubuntu.go:169] provisioning hostname "test-preload-20220601034553-2342"
	I0601 03:45:57.033430    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:57.103065    7741 main.go:134] libmachine: Using SSH client type: native
	I0601 03:45:57.103279    7741 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58808 <nil> <nil>}
	I0601 03:45:57.103303    7741 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220601034553-2342 && echo "test-preload-20220601034553-2342" | sudo tee /etc/hostname
	I0601 03:45:57.259414    7741 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220601034553-2342
	
	I0601 03:45:57.259490    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:57.329032    7741 main.go:134] libmachine: Using SSH client type: native
	I0601 03:45:57.329194    7741 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58808 <nil> <nil>}
	I0601 03:45:57.329232    7741 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220601034553-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220601034553-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220601034553-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 03:45:57.449579    7741 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 03:45:57.449600    7741 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 03:45:57.449622    7741 ubuntu.go:177] setting up certificates
	I0601 03:45:57.449629    7741 provision.go:83] configureAuth start
	I0601 03:45:57.449696    7741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220601034553-2342
	I0601 03:45:57.518019    7741 provision.go:138] copyHostCerts
	I0601 03:45:57.518114    7741 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 03:45:57.518138    7741 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 03:45:57.518242    7741 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 03:45:57.518465    7741 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 03:45:57.518479    7741 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 03:45:57.518572    7741 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 03:45:57.518749    7741 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 03:45:57.518758    7741 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 03:45:57.518822    7741 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 03:45:57.518947    7741 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220601034553-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220601034553-2342]
	I0601 03:45:57.589927    7741 provision.go:172] copyRemoteCerts
	I0601 03:45:57.589979    7741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 03:45:57.590024    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:57.660181    7741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58808 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601034553-2342/id_rsa Username:docker}
	I0601 03:45:57.747511    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 03:45:57.763871    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0601 03:45:57.780638    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 03:45:57.797931    7741 provision.go:86] duration metric: configureAuth took 348.248326ms
	I0601 03:45:57.797959    7741 ubuntu.go:193] setting minikube options for container-runtime
	I0601 03:45:57.798142    7741 config.go:178] Loaded profile config "test-preload-20220601034553-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0601 03:45:57.798267    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:57.868204    7741 main.go:134] libmachine: Using SSH client type: native
	I0601 03:45:57.868417    7741 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58808 <nil> <nil>}
	I0601 03:45:57.868465    7741 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 03:45:57.992152    7741 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 03:45:57.992165    7741 ubuntu.go:71] root file system type: overlay
	I0601 03:45:57.992308    7741 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 03:45:57.992396    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:58.060578    7741 main.go:134] libmachine: Using SSH client type: native
	I0601 03:45:58.060726    7741 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58808 <nil> <nil>}
	I0601 03:45:58.060778    7741 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 03:45:58.188852    7741 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 03:45:58.188936    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:58.257944    7741 main.go:134] libmachine: Using SSH client type: native
	I0601 03:45:58.258117    7741 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58808 <nil> <nil>}
	I0601 03:45:58.258132    7741 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 03:45:58.830642    7741 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:45:58.194413229 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0601 03:45:58.830662    7741 machine.go:91] provisioned docker machine in 1.797344666s
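For illustration only, the idempotent unit-file update that the provisioning step above performs over SSH (diff the proposed docker.service against the installed one, and only move it into place and restart the daemon when they differ) can be sketched in Go roughly as follows. The function name updateUnitIfChanged and the fact that it runs locally under sudo-less exec are assumptions for the sketch, not minikube's actual provisioner code.

// unitswap.go: minimal sketch of the "replace the unit only if it changed" pattern.
package main

import (
	"fmt"
	"os/exec"
)

func updateUnitIfChanged(current, proposed, service string) error {
	// `diff -u` exits 0 when the files are identical; nothing to do then,
	// so the running daemon is left untouched (as in the logged command).
	if err := exec.Command("diff", "-u", current, proposed).Run(); err == nil {
		return nil
	}
	steps := [][]string{
		{"mv", proposed, current},
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	// In the log this runs as root over SSH inside the node container.
	if err := updateUnitIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
		"docker"); err != nil {
		fmt.Println(err)
	}
}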
	I0601 03:45:58.830668    7741 client.go:171] LocalClient.Create took 4.209404997s
	I0601 03:45:58.830707    7741 start.go:173] duration metric: libmachine.API.Create for "test-preload-20220601034553-2342" took 4.209469749s
	I0601 03:45:58.830715    7741 start.go:306] post-start starting for "test-preload-20220601034553-2342" (driver="docker")
	I0601 03:45:58.830719    7741 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 03:45:58.830803    7741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 03:45:58.830875    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:58.902298    7741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58808 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601034553-2342/id_rsa Username:docker}
	I0601 03:45:58.988095    7741 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 03:45:58.991303    7741 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 03:45:58.991319    7741 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 03:45:58.991325    7741 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 03:45:58.991348    7741 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 03:45:58.991359    7741 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 03:45:58.991465    7741 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 03:45:58.991595    7741 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 03:45:58.991734    7741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 03:45:58.998849    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 03:45:59.015936    7741 start.go:309] post-start completed in 185.211622ms
	I0601 03:45:59.016805    7741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220601034553-2342
	I0601 03:45:59.086018    7741 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/config.json ...
	I0601 03:45:59.086422    7741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 03:45:59.086496    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:59.155261    7741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58808 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601034553-2342/id_rsa Username:docker}
	I0601 03:45:59.239731    7741 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 03:45:59.244097    7741 start.go:134] duration metric: createHost completed in 4.665604128s
	I0601 03:45:59.244116    7741 start.go:81] releasing machines lock for "test-preload-20220601034553-2342", held for 4.665757302s
	I0601 03:45:59.244191    7741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220601034553-2342
	I0601 03:45:59.313203    7741 ssh_runner.go:195] Run: systemctl --version
	I0601 03:45:59.313206    7741 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 03:45:59.313284    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:59.313283    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:45:59.387779    7741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58808 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601034553-2342/id_rsa Username:docker}
	I0601 03:45:59.388408    7741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58808 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601034553-2342/id_rsa Username:docker}
	I0601 03:45:59.607363    7741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 03:45:59.616550    7741 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 03:45:59.625716    7741 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 03:45:59.625783    7741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 03:45:59.634656    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 03:45:59.647159    7741 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 03:45:59.726215    7741 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 03:45:59.792112    7741 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 03:45:59.802134    7741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 03:45:59.872135    7741 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 03:45:59.882432    7741 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 03:45:59.917350    7741 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 03:45:59.996986    7741 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	I0601 03:45:59.997147    7741 cli_runner.go:164] Run: docker exec -t test-preload-20220601034553-2342 dig +short host.docker.internal
	I0601 03:46:00.130563    7741 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 03:46:00.130664    7741 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 03:46:00.135374    7741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
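The shell pipeline above refreshes a single /etc/hosts entry idempotently: strip any existing line for the name, append the new mapping, then copy the result back into place. A rough Go equivalent (an illustration, not minikube's code; the IP and host name are taken from the log lines above):

// hostsentry.go: replace-or-append one /etc/hosts mapping.
package main

import (
	"os"
	"strings"
)

func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for this name (the grep -v in the log).
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Write the contents back in place rather than renaming a new file over it:
	// inside a container /etc/hosts is usually a bind mount, which is why the
	// logged command copies with `cp` instead of moving a temp file.
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		panic(err)
	}
}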
	I0601 03:46:00.145397    7741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220601034553-2342
	I0601 03:46:00.214681    7741 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0601 03:46:00.214745    7741 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 03:46:00.242378    7741 docker.go:610] Got preloaded images: 
	I0601 03:46:00.242392    7741 docker.go:616] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0601 03:46:00.242397    7741 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0601 03:46:00.249957    7741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 03:46:00.250160    7741 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 03:46:00.250701    7741 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0601 03:46:00.251084    7741 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0601 03:46:00.251757    7741 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0601 03:46:00.251955    7741 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 03:46:00.252413    7741 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0601 03:46:00.253557    7741 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 03:46:00.257024    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:46:00.258218    7741 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0601 03:46:00.258401    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:46:00.259495    7741 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0601 03:46:00.259999    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:46:00.260270    7741 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0601 03:46:00.260427    7741 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0601 03:46:00.260796    7741 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0601 03:46:00.648288    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0601 03:46:00.679221    7741 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0601 03:46:00.679252    7741 docker.go:291] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0601 03:46:00.679300    7741 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0601 03:46:00.708526    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0601 03:46:00.708689    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0601 03:46:00.711142    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 03:46:00.712842    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0601 03:46:00.712861    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0601 03:46:00.730586    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0601 03:46:00.733210    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 03:46:00.761931    7741 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0601 03:46:00.761960    7741 docker.go:291] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 03:46:00.762016    7741 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 03:46:00.782743    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0601 03:46:00.821349    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0601 03:46:00.821544    7741 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0601 03:46:00.821572    7741 docker.go:291] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 03:46:00.821633    7741 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 03:46:00.828356    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 03:46:00.831903    7741 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0601 03:46:00.831930    7741 docker.go:291] Removing image: k8s.gcr.io/coredns:1.6.5
	I0601 03:46:00.831981    7741 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0601 03:46:00.850629    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0601 03:46:00.850747    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0601 03:46:00.904527    7741 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0601 03:46:00.904563    7741 docker.go:291] Removing image: k8s.gcr.io/pause:3.1
	I0601 03:46:00.904660    7741 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0601 03:46:00.911675    7741 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 03:46:00.954846    7741 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0601 03:46:00.954870    7741 docker.go:291] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0601 03:46:00.954925    7741 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0601 03:46:00.978150    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0601 03:46:00.978179    7741 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0601 03:46:00.978215    7741 docker.go:291] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 03:46:00.978274    7741 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 03:46:00.978294    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0601 03:46:00.987473    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0601 03:46:00.987510    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0601 03:46:00.988466    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0601 03:46:00.988606    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0601 03:46:01.044064    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0601 03:46:01.044131    7741 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0601 03:46:01.044157    7741 docker.go:291] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 03:46:01.044197    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0601 03:46:01.044208    7741 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 03:46:01.062823    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0601 03:46:01.062855    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0601 03:46:01.062867    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0601 03:46:01.062976    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0601 03:46:01.102714    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0601 03:46:01.102737    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0601 03:46:01.102761    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0601 03:46:01.102857    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0601 03:46:01.164806    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0601 03:46:01.164806    7741 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0601 03:46:01.164850    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0601 03:46:01.164856    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0601 03:46:01.164878    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0601 03:46:01.165034    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0601 03:46:01.180835    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0601 03:46:01.180871    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0601 03:46:01.238911    7741 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0601 03:46:01.238955    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0601 03:46:01.325072    7741 docker.go:258] Loading image: /var/lib/minikube/images/pause_3.1
	I0601 03:46:01.325098    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0601 03:46:01.584498    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0601 03:46:02.346708    7741 docker.go:258] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0601 03:46:02.346723    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0601 03:46:03.014436    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0601 03:46:03.014470    7741 docker.go:258] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0601 03:46:03.014505    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0601 03:46:03.940543    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0601 03:46:03.940567    7741 docker.go:258] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0601 03:46:03.940585    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0601 03:46:06.510422    7741 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load": (2.569804019s)
	I0601 03:46:06.510435    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0601 03:46:06.510456    7741 docker.go:258] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0601 03:46:06.510467    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0601 03:46:07.006714    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0601 03:46:07.006747    7741 docker.go:258] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0601 03:46:07.006761    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0601 03:46:08.141411    7741 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load": (1.134627783s)
	I0601 03:46:08.141425    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0601 03:46:08.141451    7741 docker.go:258] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0601 03:46:08.141481    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0601 03:46:09.272767    7741 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.131241778s)
	I0601 03:46:09.272783    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0601 03:46:09.272840    7741 docker.go:258] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0601 03:46:09.272848    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0601 03:46:12.285392    7741 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (3.012509159s)
	I0601 03:46:12.285427    7741 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0601 03:46:12.285459    7741 cache_images.go:123] Successfully loaded all cached images
	I0601 03:46:12.285464    7741 cache_images.go:92] LoadImages completed in 12.042976162s
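The image-cache flow logged above follows one loop per image: stat the staged tarball on the node, transfer it from the host cache if the stat fails, then pipe it into `docker load`. The sketch below shows the same idea in simplified local form; in minikube the stat, copy, and load all run over SSH inside the node container, and the shortened paths here are illustrative stand-ins for the cache paths in the log.

// loadimages.go: stage a cached image tarball if missing, then docker load it.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func loadIfMissing(cached, staged string) error {
	if _, err := os.Stat(staged); err != nil {
		// Not staged yet: copy the cached tarball into place (the scp step).
		src, err := os.Open(cached)
		if err != nil {
			return err
		}
		defer src.Close()
		dst, err := os.Create(staged)
		if err != nil {
			return err
		}
		if _, err := io.Copy(dst, src); err != nil {
			dst.Close()
			return err
		}
		dst.Close()
	}
	// Equivalent of `cat <tarball> | docker load`.
	f, err := os.Open(staged)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	if err := loadIfMissing("cache/images/pause_3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Println(err)
	}
}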
	I0601 03:46:12.285570    7741 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 03:46:12.359778    7741 cni.go:95] Creating CNI manager for ""
	I0601 03:46:12.359806    7741 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:46:12.359819    7741 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 03:46:12.359836    7741 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220601034553-2342 NodeName:test-preload-20220601034553-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 03:46:12.359956    7741 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220601034553-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 03:46:12.360039    7741 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220601034553-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601034553-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 03:46:12.360094    7741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0601 03:46:12.368026    7741 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0601 03:46:12.368101    7741 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0601 03:46:12.375275    7741 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0601 03:46:12.375279    7741 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0601 03:46:12.375297    7741 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubeadm
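The `?checksum=file:...sha256` suffix in the URLs above tells the downloader to fetch the published digest alongside the binary and verify it. A hand-rolled Go sketch of the same verification (the destination file name and the use of kubectl as the example are assumptions; error handling is kept minimal):

// fetchverify.go: download a release binary and check it against its .sha256 file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch for kubectl")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
}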
	I0601 03:46:12.907319    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0601 03:46:12.913155    7741 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0601 03:46:12.913196    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0601 03:46:12.935002    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0601 03:46:13.004816    7741 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0601 03:46:13.004862    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0601 03:46:13.313584    7741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 03:46:13.385169    7741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0601 03:46:13.455617    7741 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0601 03:46:13.455656    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0601 03:46:16.117785    7741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 03:46:16.125093    7741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0601 03:46:16.138969    7741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 03:46:16.151950    7741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0601 03:46:16.164412    7741 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 03:46:16.168075    7741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 03:46:16.177419    7741 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342 for IP: 192.168.49.2
	I0601 03:46:16.177539    7741 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 03:46:16.177587    7741 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 03:46:16.177626    7741 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/client.key
	I0601 03:46:16.177637    7741 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/client.crt with IP's: []
	I0601 03:46:16.233911    7741 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/client.crt ...
	I0601 03:46:16.233921    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/client.crt: {Name:mk9721751d42c5a16d014ad5e3d031188b4fed38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:46:16.234279    7741 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/client.key ...
	I0601 03:46:16.234288    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/client.key: {Name:mkee4a79d493160674c5fad89f7a53d2bf06bd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:46:16.234510    7741 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.key.dd3b5fb2
	I0601 03:46:16.234528    7741 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 03:46:16.277222    7741 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.crt.dd3b5fb2 ...
	I0601 03:46:16.277232    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.crt.dd3b5fb2: {Name:mk214257279186b59d2b6e8728d8e885fd4c012b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:46:16.277480    7741 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.key.dd3b5fb2 ...
	I0601 03:46:16.277488    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.key.dd3b5fb2: {Name:mk7e82c9858362f094c2a4a3b910abc5186711f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:46:16.277693    7741 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.crt
	I0601 03:46:16.277848    7741 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.key
	I0601 03:46:16.278000    7741 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.key
	I0601 03:46:16.278016    7741 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.crt with IP's: []
	I0601 03:46:16.505237    7741 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.crt ...
	I0601 03:46:16.505253    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.crt: {Name:mk2efd4ba7c440617c53ddefe5582bbeecc2b439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:46:16.505544    7741 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.key ...
	I0601 03:46:16.505553    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.key: {Name:mka2d402a1c018c0fdf123433655c015f2b320df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
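The certificate steps logged above all share one shape: reuse the existing CA key pair, generate a fresh leaf key, and issue a certificate with the required IP SANs. A compact Go sketch of that shape using crypto/x509 follows; the file names, subject, validity period, and key usages are illustrative assumptions, not a copy of minikube's crypto.go.

// signcert.go: issue a leaf certificate with IP SANs from an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the DER bytes of its first PEM block.
func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.crt"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca.key"))
	if err != nil {
		panic(err)
	}
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		// The SANs issued for the apiserver cert in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("apiserver.key",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)}), 0o600); err != nil {
		panic(err)
	}
}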
	I0601 03:46:16.505939    7741 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 03:46:16.505982    7741 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 03:46:16.505996    7741 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 03:46:16.506029    7741 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 03:46:16.506062    7741 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 03:46:16.506091    7741 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 03:46:16.506195    7741 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 03:46:16.506728    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 03:46:16.525250    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 03:46:16.543058    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 03:46:16.560344    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601034553-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 03:46:16.577643    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 03:46:16.594666    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 03:46:16.611542    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 03:46:16.629120    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 03:46:16.646209    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 03:46:16.663423    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 03:46:16.680102    7741 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 03:46:16.697090    7741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 03:46:16.710224    7741 ssh_runner.go:195] Run: openssl version
	I0601 03:46:16.715657    7741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 03:46:16.723909    7741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 03:46:16.728133    7741 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 03:46:16.728178    7741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 03:46:16.733481    7741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 03:46:16.741640    7741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 03:46:16.749591    7741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 03:46:16.753894    7741 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 03:46:16.753933    7741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 03:46:16.759227    7741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 03:46:16.767149    7741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 03:46:16.775073    7741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:46:16.779332    7741 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:46:16.779381    7741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:46:16.785170    7741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
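The trust-store commands above copy each CA PEM into /usr/share/ca-certificates and then create the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL-based tools use for lookup. A small Go sketch of that last step, shelling out to openssl for the subject hash exactly as the logged commands do (the helper name installCA is illustrative):

// catrust.go: link a CA certificate into the OpenSSL hash directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash used for lookup.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Refresh the link if it already exists (mirrors `ln -fs` in the log).
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}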
	I0601 03:46:16.793258    7741 kubeadm.go:395] StartCluster: {Name:test-preload-20220601034553-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601034553-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:46:16.793403    7741 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 03:46:16.822353    7741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 03:46:16.829966    7741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 03:46:16.838255    7741 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 03:46:16.838300    7741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 03:46:16.846091    7741 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
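The check above exits with status 2 because none of the four kubeconfig files exist yet on a fresh node, so minikube skips its stale-config cleanup and goes straight to `kubeadm init`. The same pre-check can be reproduced by hand; a sketch assuming shell access to the node:

    # GNU ls exits non-zero (2) if any of the listed files cannot be accessed
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
        && echo "existing cluster config found" \
        || echo "no existing config; stale-config cleanup is skipped"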
	I0601 03:46:16.846115    7741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 03:46:17.565886    7741 out.go:204]   - Generating certificates and keys ...
	I0601 03:46:20.502983    7741 out.go:204]   - Booting up control plane ...
	W0601 03:48:15.423901    7741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220601034553-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220601034553-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 10:46:16.911724    1458 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 10:46:16.911777    1458 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:46:20.488771    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:46:20.489924    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220601034553-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220601034553-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 10:46:16.911724    1458 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 10:46:16.911777    1458 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:46:20.488771    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:46:20.489924    1458 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
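The kubeadm output above already names the useful follow-ups: check the kubelet service and look for a crashed control-plane container. A sketch of running those checks inside the minikube node, assuming `ssh -p <profile>` with the test binary is used to reach it:

    # open a shell on the node for this profile
    out/minikube-darwin-amd64 ssh -p test-preload-20220601034553-2342
    # then, on the node:
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100
    docker ps -a | grep kube | grep -v pause   # find the failing control-plane container
    docker logs CONTAINERID                    # inspect its logs (CONTAINERID taken from the line above)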
	
	I0601 03:48:15.423933    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 03:48:15.851576    7741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 03:48:15.861575    7741 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 03:48:15.861618    7741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 03:48:15.869069    7741 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 03:48:15.869089    7741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 03:48:16.572761    7741 out.go:204]   - Generating certificates and keys ...
	I0601 03:48:17.112138    7741 out.go:204]   - Booting up control plane ...
	I0601 03:50:12.032197    7741 kubeadm.go:397] StartCluster complete in 3m55.237317805s
	I0601 03:50:12.032286    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 03:50:12.060887    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.060899    7741 logs.go:276] No container was found matching "kube-apiserver"
	I0601 03:50:12.060951    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 03:50:12.089663    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.089675    7741 logs.go:276] No container was found matching "etcd"
	I0601 03:50:12.089734    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 03:50:12.118206    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.118219    7741 logs.go:276] No container was found matching "coredns"
	I0601 03:50:12.118273    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 03:50:12.146196    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.146208    7741 logs.go:276] No container was found matching "kube-scheduler"
	I0601 03:50:12.146261    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 03:50:12.176042    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.176055    7741 logs.go:276] No container was found matching "kube-proxy"
	I0601 03:50:12.176109    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 03:50:12.204906    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.204917    7741 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 03:50:12.204974    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 03:50:12.233760    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.233772    7741 logs.go:276] No container was found matching "storage-provisioner"
	I0601 03:50:12.233828    7741 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 03:50:12.261974    7741 logs.go:274] 0 containers: []
	W0601 03:50:12.261986    7741 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 03:50:12.262000    7741 logs.go:123] Gathering logs for container status ...
	I0601 03:50:12.262008    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 03:50:14.319116    7741 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057079049s)
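The container-status command above relies on a shell fallback: `which crictl || echo crictl` expands to the crictl path when it is installed, and to the bare word crictl otherwise, so when the first `ps -a` fails the `|| sudo docker ps -a` branch runs instead. The same pattern in isolation:

    # prefer crictl when present, otherwise fall back to the docker CLI
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a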
	I0601 03:50:14.319262    7741 logs.go:123] Gathering logs for kubelet ...
	I0601 03:50:14.319271    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 03:50:14.358044    7741 logs.go:123] Gathering logs for dmesg ...
	I0601 03:50:14.358057    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 03:50:14.371416    7741 logs.go:123] Gathering logs for describe nodes ...
	I0601 03:50:14.371428    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 03:50:14.423938    7741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 03:50:14.423951    7741 logs.go:123] Gathering logs for Docker ...
	I0601 03:50:14.423959    7741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
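The log sources gathered above show no control-plane containers and a refused API connection at localhost:8443, which is consistent with the kubelet never coming up. A quick way to confirm that by hand, assuming shell access to the node (port 10248 is the kubelet healthz endpoint kubeadm polls above, 8443 is the API server port from the cluster config; availability of `ss` on the node is assumed):

    curl -sS http://localhost:10248/healthz || echo "kubelet healthz not responding"
    sudo ss -ltnp | grep -E ':(8443|10248)' || echo "neither apiserver nor kubelet port is listening"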
	W0601 03:50:14.437518    7741 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 10:48:15.931940    3745 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 10:48:15.931990    3745 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:48:17.094647    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:48:17.095368    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 03:50:14.437540    7741 out.go:239] * 
	* 
	W0601 03:50:14.437648    7741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 10:48:15.931940    3745 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 10:48:15.931990    3745 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:48:17.094647    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:48:17.095368    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 10:48:15.931940    3745 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 10:48:15.931990    3745 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:48:17.094647    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:48:17.095368    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 03:50:14.437662    7741 out.go:239] * 
	* 
	W0601 03:50:14.438234    7741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
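The box above asks for `minikube logs` output when filing an issue; a sketch of collecting it for this profile with the binary used by the test:

    out/minikube-darwin-amd64 logs -p test-preload-20220601034553-2342 --file=logs.txt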
	I0601 03:50:14.502240    7741 out.go:177] 
	W0601 03:50:14.545458    7741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 10:48:15.931940    3745 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 10:48:15.931990    3745 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:48:17.094647    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:48:17.095368    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 10:48:15.931940    3745 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 10:48:15.931990    3745 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 10:48:17.094647    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 10:48:17.095368    3745 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 03:50:14.545611    7741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 03:50:14.545723    7741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 03:50:14.609034    7741 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220601034553-2342 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
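Exit status 109 is reported here alongside the K8S_KUBELET_NOT_RUNNING reason above, and the log's own suggestion is to retry with the kubelet cgroup driver pinned to systemd. A sketch of rerunning the same start command with that extra config (all other flags copied from the failing invocation above):

    out/minikube-darwin-amd64 start -p test-preload-20220601034553-2342 --memory=2200 \
        --alsologtostderr --wait=true --preload=false --driver=docker \
        --kubernetes-version=v1.17.0 \
        --extra-config=kubelet.cgroup-driver=systemd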
panic.go:482: *** TestPreload FAILED at 2022-06-01 03:50:14.736327 -0700 PDT m=+1841.382610487
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220601034553-2342
helpers_test.go:235: (dbg) docker inspect test-preload-20220601034553-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2d54bf3c5433c8b2ca8fbe6840a68ec8cd8cafb72e712940fee309e611227882",
	        "Created": "2022-06-01T10:45:55.75420578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:45:56.104025552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/2d54bf3c5433c8b2ca8fbe6840a68ec8cd8cafb72e712940fee309e611227882/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d54bf3c5433c8b2ca8fbe6840a68ec8cd8cafb72e712940fee309e611227882/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d54bf3c5433c8b2ca8fbe6840a68ec8cd8cafb72e712940fee309e611227882/hosts",
	        "LogPath": "/var/lib/docker/containers/2d54bf3c5433c8b2ca8fbe6840a68ec8cd8cafb72e712940fee309e611227882/2d54bf3c5433c8b2ca8fbe6840a68ec8cd8cafb72e712940fee309e611227882-json.log",
	        "Name": "/test-preload-20220601034553-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220601034553-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220601034553-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/63e7f7cb3b9d430092bc6da2b6709c6836b62d1faf1a837db620d1471ee5f286-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63e7f7cb3b9d430092bc6da2b6709c6836b62d1faf1a837db620d1471ee5f286/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63e7f7cb3b9d430092bc6da2b6709c6836b62d1faf1a837db620d1471ee5f286/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63e7f7cb3b9d430092bc6da2b6709c6836b62d1faf1a837db620d1471ee5f286/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220601034553-2342",
	                "Source": "/var/lib/docker/volumes/test-preload-20220601034553-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220601034553-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220601034553-2342",
	                "name.minikube.sigs.k8s.io": "test-preload-20220601034553-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d6b26608cb84892b225f7f20ee0bb3dfa2ee4a3538427718820b1f7a2f74bf9f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58810"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58811"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58812"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d6b26608cb84",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220601034553-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2d54bf3c5433",
	                        "test-preload-20220601034553-2342"
	                    ],
	                    "NetworkID": "c3d6611edd5ba14af99a0964fa7890263cc0741a6531b22608508a2f8c720fca",
	                    "EndpointID": "e0eaaa251af0d914f66f4e4c8a00f7d3caf512548f551638285eb540b346f8f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
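Note: only the State and NetworkSettings blocks of the inspect dump above are usually relevant to the failure. As a side reference (this is not part of minikube's own post-mortem tooling), docker's --format flag can pull just those fields instead of the full JSON, for example:

    docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Networks}}' test-preload-20220601034553-2342

which prints the container state and the per-network address assignment (here 192.168.49.2 on the test-preload network).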
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220601034553-2342 -n test-preload-20220601034553-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220601034553-2342 -n test-preload-20220601034553-2342: exit status 6 (427.870271ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:50:15.235946    7904 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220601034553-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220601034553-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220601034553-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220601034553-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220601034553-2342: (2.536411003s)
--- FAIL: TestPreload (264.11s)
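Note: the status check above exits 6 only because the profile's endpoint is missing from the kubeconfig ("does not appear in" the kubeconfig file), while the container itself reports Running. A sketch of the manual check and repair that the warning points at, assuming the same binary and profile name as this run:

    kubectl config get-contexts
    out/minikube-darwin-amd64 update-context -p test-preload-20220601034553-2342

update-context rewrites the profile's server address in the kubeconfig so that kubectl and minikube status agree again.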

                                                
                                    
TestRunningBinaryUpgrade (71.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2321562179.exe start -p running-upgrade-20220601035801-2342 --memory=2200 --vm-driver=docker 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2321562179.exe start -p running-upgrade-20220601035801-2342 --memory=2200 --vm-driver=docker : exit status 70 (46.342015165s)

                                                
                                                
-- stdout --
	! [running-upgrade-20220601035801-2342] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1285380346
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:58:17.162007960 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220601035801-2342" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:58:45.753005979 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220601035801-2342", then "minikube start -p running-upgrade-20220601035801-2342 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 28.03 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 65.03 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 120.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 168.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 224.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 258.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 319.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 360.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 408.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 454.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 498.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:58:45.753005979 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
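Note: both StartHost failures above come from the docker.service unit that the v1.9.0 provisioner writes into the node (/lib/systemd/system/docker.service.new in the diff). The comments in that generated unit describe the standard systemd pattern of clearing an inherited start command with an empty ExecStart= line before setting a new one. A minimal, illustrative sketch of that pattern as a drop-in on an ordinary host (this is not the file minikube writes; minikube replaces the whole unit):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker

Also visible in the diff: the generated ExecReload line loses the $MAINPID argument, though the log does not show whether that is what makes docker.service fail to start.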
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2321562179.exe start -p running-upgrade-20220601035801-2342 --memory=2200 --vm-driver=docker 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2321562179.exe start -p running-upgrade-20220601035801-2342 --memory=2200 --vm-driver=docker : exit status 70 (14.628765023s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220601035801-2342] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3425338498
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220601035801-2342" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2321562179.exe start -p running-upgrade-20220601035801-2342 --memory=2200 --vm-driver=docker 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2321562179.exe start -p running-upgrade-20220601035801-2342 --memory=2200 --vm-driver=docker : exit status 70 (5.071727048s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220601035801-2342] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1308565547
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220601035801-2342" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
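Note: all three attempts end with docker.service failing inside the node container, and the output only points at "systemctl status docker.service" and "journalctl -xe". Since the kic container runs systemd as init, one way to follow that pointer from the host, assuming the container from the last attempt is still present under this name, is:

    docker exec running-upgrade-20220601035801-2342 systemctl status docker.service --no-pager
    docker exec running-upgrade-20220601035801-2342 journalctl -u docker.service --no-pager -n 100

The post-mortem below only captures docker inspect and minikube status (log retrieval is skipped), so the actual dockerd error does not appear in this report.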
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-06-01 03:59:09.908379 -0700 PDT m=+2376.550040850
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220601035801-2342
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220601035801-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f7dcba5bb46eb880c355eb0e66c8afc8615cbe91999d54d3a42233a7ffdbde3",
	        "Created": "2022-06-01T10:58:36.308636993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 127997,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:58:36.535082719Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/7f7dcba5bb46eb880c355eb0e66c8afc8615cbe91999d54d3a42233a7ffdbde3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f7dcba5bb46eb880c355eb0e66c8afc8615cbe91999d54d3a42233a7ffdbde3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f7dcba5bb46eb880c355eb0e66c8afc8615cbe91999d54d3a42233a7ffdbde3/hosts",
	        "LogPath": "/var/lib/docker/containers/7f7dcba5bb46eb880c355eb0e66c8afc8615cbe91999d54d3a42233a7ffdbde3/7f7dcba5bb46eb880c355eb0e66c8afc8615cbe91999d54d3a42233a7ffdbde3-json.log",
	        "Name": "/running-upgrade-20220601035801-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220601035801-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f48f130ab84337732404864832bf2b349562ee6870eeac051e5160b8c1fb47eb-init/diff:/var/lib/docker/overlay2/5a5021b04d40486c3f899d3d86469c69d0a0a3a6aedb4a262808e8e0e3212dd9/diff:/var/lib/docker/overlay2/34d2fad93be8a8b08db19932b165d6e4ee12c642f5b9a71ae0da16e41e895455/diff:/var/lib/docker/overlay2/a519d8b71fe163aad87235d12fd7596db7d55f7f2c546ea938ac5b44f16b652f/diff:/var/lib/docker/overlay2/2f15e48f7fd9f51c0246edf680b5bf5101d756e18f610fe615defe179c7ff534/diff:/var/lib/docker/overlay2/b3950a464734420ac98826fd7846d239d550db1d1ae773f32fd285af845cdf22/diff:/var/lib/docker/overlay2/8988ddfdbc34033c8f6dfbda80a939b635699c7799196fc6e1c67870aa3a98fe/diff:/var/lib/docker/overlay2/7ba0245eca92a262dcf5985ae53e44b4246b2148cf3041b19299c4824436c857/diff:/var/lib/docker/overlay2/6c8ceadb783c54050c9822b7a9c7e32f5c8c95922ec59c1027de2484daecd2b4/diff:/var/lib/docker/overlay2/35b8de062c6e2440d11c06c0221db2bc4763da7dcc75f1ff234a1a6620f908c0/diff:/var/lib/docker/overlay2/3584c2
bd1bdbc4f33ae8409b002bb9449ef69f5eac5efaf3029bafd8e59e616d/diff:/var/lib/docker/overlay2/89f35c1cfd5f4b4711c8faf3c75a939b4b42ad8280d52e46ed9174898ebd4dea/diff:/var/lib/docker/overlay2/ba52e45aa55684244ce68ffb6f37275e672a920729ea5be00e4cc02625a11336/diff:/var/lib/docker/overlay2/88f06922766e6932db8f1d9662f093b42c354676160da5d7d627df01138940d2/diff:/var/lib/docker/overlay2/e30f8690cf13147aeb6cc0f6af6a5cc429942a49d65fc69df4976e32002b2c9c/diff:/var/lib/docker/overlay2/a013d03dab2547e58c77f48109fc20ac70497dba6843d25ae3705c054244401e/diff:/var/lib/docker/overlay2/cdb70bf8140c088f0dea40152c2a2ce37a40912c2a58e90e93f143d49795084f/diff:/var/lib/docker/overlay2/65b836a39622281946b823eb252606e8e09382a0f51a3fd2000a31247d55db47/diff:/var/lib/docker/overlay2/ba32c157bb001a6bdee2dd25782f9072b8f2c1f17dd60711c5dc96767ca3633e/diff:/var/lib/docker/overlay2/ebafcf8827f052a7339d84dae13db8562e7c9ff8c83ab195475000d74a29cb36/diff:/var/lib/docker/overlay2/be3502d132a8b884468dd4a5bcd811e32bd090fb7b255d888e53c9d4014ba2e0/diff:/var/lib/d
ocker/overlay2/f3b71613f15fd8e9cf665f9751d01943a85c6e1f36bc8a4317db3788ca9a6d68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f48f130ab84337732404864832bf2b349562ee6870eeac051e5160b8c1fb47eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f48f130ab84337732404864832bf2b349562ee6870eeac051e5160b8c1fb47eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f48f130ab84337732404864832bf2b349562ee6870eeac051e5160b8c1fb47eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220601035801-2342",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220601035801-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220601035801-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220601035801-2342",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220601035801-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d0b0da846c7245622616dda1a24211cfa27369d0eae00e33ba9febb59281bf97",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d0b0da846c72",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "1e6fecad153dec66546039e8e71f7a59960a148deac67f5b7a9b049c9497de8b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "33a147822a1df2cb962e3f3391e1b8a7b8a9daf43edd77b0aa62a42bb8a73f1c",
	                    "EndpointID": "1e6fecad153dec66546039e8e71f7a59960a148deac67f5b7a9b049c9497de8b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220601035801-2342 -n running-upgrade-20220601035801-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220601035801-2342 -n running-upgrade-20220601035801-2342: exit status 6 (472.744987ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:59:10.448664   10336 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220601035801-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220601035801-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220601035801-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220601035801-2342

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220601035801-2342: (2.522466055s)
--- FAIL: TestRunningBinaryUpgrade (71.76s)

                                                
                                    
TestKubernetesUpgrade (314.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m14.283682746s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220601035912-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220601035912-2342 in cluster kubernetes-upgrade-20220601035912-2342
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:59:13.036571   10396 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:59:13.036812   10396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:59:13.036817   10396 out.go:309] Setting ErrFile to fd 2...
	I0601 03:59:13.036823   10396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:59:13.036932   10396 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:59:13.037248   10396 out.go:303] Setting JSON to false
	I0601 03:59:13.052950   10396 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3523,"bootTime":1654077630,"procs":346,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 03:59:13.053076   10396 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 03:59:13.075671   10396 out.go:177] * [kubernetes-upgrade-20220601035912-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 03:59:13.097389   10396 notify.go:193] Checking for updates...
	I0601 03:59:13.119031   10396 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 03:59:13.141170   10396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 03:59:13.163400   10396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 03:59:13.185249   10396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 03:59:13.207380   10396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 03:59:13.229841   10396 config.go:178] Loaded profile config "missing-upgrade-20220601035819-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0601 03:59:13.229924   10396 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 03:59:13.303082   10396 docker.go:137] docker version: linux-20.10.14
	I0601 03:59:13.303218   10396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:59:13.432106   10396 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 10:59:13.38134151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:59:13.475571   10396 out.go:177] * Using the docker driver based on user configuration
	I0601 03:59:13.497600   10396 start.go:284] selected driver: docker
	I0601 03:59:13.497621   10396 start.go:806] validating driver "docker" against <nil>
	I0601 03:59:13.497660   10396 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 03:59:13.501077   10396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:59:13.652440   10396 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 10:59:13.580967304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:59:13.652587   10396 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 03:59:13.652732   10396 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 03:59:13.674347   10396 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 03:59:13.695190   10396 cni.go:95] Creating CNI manager for ""
	I0601 03:59:13.695207   10396 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:59:13.695214   10396 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220601035912-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601035912-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:59:13.738014   10396 out.go:177] * Starting control plane node kubernetes-upgrade-20220601035912-2342 in cluster kubernetes-upgrade-20220601035912-2342
	I0601 03:59:13.759530   10396 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 03:59:13.781487   10396 out.go:177] * Pulling base image ...
	I0601 03:59:13.824471   10396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 03:59:13.824485   10396 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 03:59:13.894141   10396 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 03:59:13.894164   10396 cache.go:57] Caching tarball of preloaded images
	I0601 03:59:13.894400   10396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 03:59:13.894547   10396 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 03:59:13.916173   10396 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0601 03:59:13.916195   10396 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 03:59:13.957937   10396 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:59:14.061001   10396 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 03:59:16.345384   10396 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:59:16.345528   10396 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:59:16.894543   10396 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
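
The download above appends an md5 checksum to the preload URL and verifies the tarball before treating it as cached. A minimal, hypothetical Go sketch of that verification step (the file name and expected sum are copied from the log; this is not minikube's preload.go):

	// Sketch: verify a downloaded tarball against an expected MD5 checksum.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected hex digest.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Placeholder path; the real tarball lives under .minikube/cache/preloaded-tarball/.
		if err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
			"326f3ce331abb64565b50b8c9e791244"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload checksum OK")
	}
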
	I0601 03:59:16.894628   10396 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/config.json ...
	I0601 03:59:16.894651   10396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/config.json: {Name:mkd97f78dcb2d8691af0e465946dd47e2670c0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:59:16.894900   10396 cache.go:206] Successfully downloaded all kic artifacts
	I0601 03:59:16.894930   10396 start.go:352] acquiring machines lock for kubernetes-upgrade-20220601035912-2342: {Name:mkffe554b00abb9e85d2be7f80ec9bc94c71958c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:59:16.895019   10396 start.go:356] acquired machines lock for "kubernetes-upgrade-20220601035912-2342" in 81.044µs
	I0601 03:59:16.895041   10396 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220601035912-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601035912
-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Po
rt:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 03:59:16.895087   10396 start.go:131] createHost starting for "" (driver="docker")
	I0601 03:59:16.923399   10396 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 03:59:16.923621   10396 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220601035912-2342" (driver="docker")
	I0601 03:59:16.923645   10396 client.go:168] LocalClient.Create starting
	I0601 03:59:16.923728   10396 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 03:59:16.923764   10396 main.go:134] libmachine: Decoding PEM data...
	I0601 03:59:16.923777   10396 main.go:134] libmachine: Parsing certificate...
	I0601 03:59:16.923831   10396 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 03:59:16.923857   10396 main.go:134] libmachine: Decoding PEM data...
	I0601 03:59:16.923866   10396 main.go:134] libmachine: Parsing certificate...
	I0601 03:59:16.943839   10396 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601035912-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 03:59:17.012332   10396 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601035912-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 03:59:17.012447   10396 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220601035912-2342] to gather additional debugging logs...
	I0601 03:59:17.012472   10396 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601035912-2342
	W0601 03:59:17.076870   10396 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601035912-2342 returned with exit code 1
	I0601 03:59:17.076908   10396 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220601035912-2342]: docker network inspect kubernetes-upgrade-20220601035912-2342: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220601035912-2342
	I0601 03:59:17.076951   10396 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220601035912-2342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220601035912-2342
	
	** /stderr **
	I0601 03:59:17.077026   10396 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 03:59:17.142467   10396 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005922e8] misses:0}
	I0601 03:59:17.142507   10396 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 03:59:17.142521   10396 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220601035912-2342 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 03:59:17.142591   10396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601035912-2342
	I0601 03:59:17.242135   10396 network_create.go:99] docker network kubernetes-upgrade-20220601035912-2342 192.168.49.0/24 created
	I0601 03:59:17.242189   10396 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20220601035912-2342" container
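
The static IP above follows from the subnet reserved for the cluster network: the gateway takes the .1 address and the first container the .2 address. A small illustrative Go sketch (not minikube's network.go) that derives the same addresses from the CIDR shown in the log:

	// Sketch: derive gateway and first node IP from a /24 subnet.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()
		gateway := net.IPv4(base[0], base[1], base[2], base[3]+1) // 192.168.49.1
		nodeIP := net.IPv4(base[0], base[1], base[2], base[3]+2)  // 192.168.49.2
		fmt.Println("gateway:", gateway, "node:", nodeIP)
	}
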
	I0601 03:59:17.242296   10396 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 03:59:17.309318   10396 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220601035912-2342 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601035912-2342 --label created_by.minikube.sigs.k8s.io=true
	I0601 03:59:17.375317   10396 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220601035912-2342
	I0601 03:59:17.375479   10396 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220601035912-2342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601035912-2342 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220601035912-2342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 03:59:17.851552   10396 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220601035912-2342
	I0601 03:59:17.851605   10396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 03:59:17.851620   10396 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 03:59:17.851726   10396 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220601035912-2342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 03:59:21.811000   10396 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220601035912-2342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (3.959151209s)
	I0601 03:59:21.811020   10396 kic.go:188] duration metric: took 3.959373 seconds to extract preloaded images to volume
	I0601 03:59:21.811130   10396 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 03:59:21.947905   10396 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220601035912-2342 --name kubernetes-upgrade-20220601035912-2342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601035912-2342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220601035912-2342 --network kubernetes-upgrade-20220601035912-2342 --ip 192.168.49.2 --volume kubernetes-upgrade-20220601035912-2342:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 03:59:22.471814   10396 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Running}}
	I0601 03:59:22.554376   10396 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Status}}
	I0601 03:59:22.642731   10396 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220601035912-2342 stat /var/lib/dpkg/alternatives/iptables
	I0601 03:59:22.804592   10396 oci.go:247] the created container "kubernetes-upgrade-20220601035912-2342" has a running status.
	I0601 03:59:22.804637   10396 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa...
	I0601 03:59:23.000775   10396 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 03:59:23.146616   10396 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Status}}
	I0601 03:59:23.231785   10396 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 03:59:23.231805   10396 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220601035912-2342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 03:59:23.391133   10396 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Status}}
	I0601 03:59:23.472116   10396 machine.go:88] provisioning docker machine ...
	I0601 03:59:23.472154   10396 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220601035912-2342"
	I0601 03:59:23.472238   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:23.556465   10396 main.go:134] libmachine: Using SSH client type: native
	I0601 03:59:23.556668   10396 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63201 <nil> <nil>}
	I0601 03:59:23.556680   10396 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220601035912-2342 && echo "kubernetes-upgrade-20220601035912-2342" | sudo tee /etc/hostname
	I0601 03:59:23.689838   10396 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220601035912-2342
	
	I0601 03:59:23.689919   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:23.773838   10396 main.go:134] libmachine: Using SSH client type: native
	I0601 03:59:23.774033   10396 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63201 <nil> <nil>}
	I0601 03:59:23.774055   10396 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220601035912-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220601035912-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220601035912-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 03:59:23.902655   10396 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 03:59:23.902688   10396 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 03:59:23.902744   10396 ubuntu.go:177] setting up certificates
	I0601 03:59:23.902761   10396 provision.go:83] configureAuth start
	I0601 03:59:23.902857   10396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:23.987508   10396 provision.go:138] copyHostCerts
	I0601 03:59:23.987634   10396 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 03:59:23.987647   10396 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 03:59:23.987820   10396 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 03:59:23.988066   10396 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 03:59:23.988080   10396 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 03:59:23.988153   10396 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 03:59:23.988363   10396 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 03:59:23.988373   10396 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 03:59:23.988458   10396 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 03:59:23.988641   10396 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220601035912-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220601035912-2342]
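
The provisioning step above issues a server certificate whose SANs cover the node IP, loopback, and the profile hostname. A hypothetical Go sketch of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas minikube signs against the CA key under .minikube/certs, and the names below are taken from the log:

	// Sketch: create a server certificate with the SANs listed in the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20220601035912-2342"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20220601035912-2342"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
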
	I0601 03:59:24.219004   10396 provision.go:172] copyRemoteCerts
	I0601 03:59:24.219084   10396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 03:59:24.219147   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:24.310704   10396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 03:59:24.398496   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 03:59:24.425791   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0601 03:59:24.454295   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 03:59:24.482812   10396 provision.go:86] duration metric: configureAuth took 580.027714ms
	I0601 03:59:24.482825   10396 ubuntu.go:193] setting minikube options for container-runtime
	I0601 03:59:24.482977   10396 config.go:178] Loaded profile config "kubernetes-upgrade-20220601035912-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 03:59:24.483031   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:24.569206   10396 main.go:134] libmachine: Using SSH client type: native
	I0601 03:59:24.569556   10396 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63201 <nil> <nil>}
	I0601 03:59:24.569572   10396 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 03:59:24.693121   10396 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 03:59:24.693135   10396 ubuntu.go:71] root file system type: overlay
	I0601 03:59:24.693354   10396 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 03:59:24.693450   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:24.785177   10396 main.go:134] libmachine: Using SSH client type: native
	I0601 03:59:24.785374   10396 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63201 <nil> <nil>}
	I0601 03:59:24.785423   10396 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 03:59:24.922559   10396 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 03:59:24.922659   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:25.007340   10396 main.go:134] libmachine: Using SSH client type: native
	I0601 03:59:25.007709   10396 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63201 <nil> <nil>}
	I0601 03:59:25.007732   10396 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 03:59:26.341097   10396 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:59:24.926125420 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0601 03:59:26.341123   10396 machine.go:91] provisioned docker machine in 2.868966562s
	I0601 03:59:26.341130   10396 client.go:171] LocalClient.Create took 9.417415896s
	I0601 03:59:26.341153   10396 start.go:173] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220601035912-2342" took 9.41746236s
	I0601 03:59:26.341169   10396 start.go:306] post-start starting for "kubernetes-upgrade-20220601035912-2342" (driver="docker")
	I0601 03:59:26.341175   10396 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 03:59:26.341238   10396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 03:59:26.341287   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:26.426672   10396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 03:59:26.521622   10396 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 03:59:26.527708   10396 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 03:59:26.527728   10396 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 03:59:26.527737   10396 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 03:59:26.527742   10396 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 03:59:26.527753   10396 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 03:59:26.527902   10396 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 03:59:26.528129   10396 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 03:59:26.528324   10396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 03:59:26.539460   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 03:59:26.562531   10396 start.go:309] post-start completed in 221.347795ms
	I0601 03:59:26.563345   10396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:26.645436   10396 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/config.json ...
	I0601 03:59:26.645892   10396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 03:59:26.645944   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:26.727962   10396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 03:59:26.813304   10396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 03:59:26.819513   10396 start.go:134] duration metric: createHost completed in 9.924345037s
	I0601 03:59:26.819538   10396 start.go:81] releasing machines lock for "kubernetes-upgrade-20220601035912-2342", held for 9.924439295s
	I0601 03:59:26.819630   10396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:26.902480   10396 ssh_runner.go:195] Run: systemctl --version
	I0601 03:59:26.902492   10396 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 03:59:26.902578   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:26.902611   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:26.996255   10396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 03:59:26.999974   10396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 03:59:27.091730   10396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 03:59:27.232653   10396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 03:59:27.253441   10396 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 03:59:27.253550   10396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 03:59:27.269902   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 03:59:27.289876   10396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 03:59:27.389305   10396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 03:59:27.477312   10396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 03:59:27.490735   10396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 03:59:27.583569   10396 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 03:59:27.594181   10396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 03:59:27.637178   10396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 03:59:27.723765   10396 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 03:59:27.723876   10396 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220601035912-2342 dig +short host.docker.internal
	I0601 03:59:27.877505   10396 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 03:59:27.877615   10396 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 03:59:27.882540   10396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 03:59:27.896232   10396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 03:59:27.982328   10396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 03:59:27.982401   10396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 03:59:28.018812   10396 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 03:59:28.018833   10396 docker.go:541] Images already preloaded, skipping extraction
	I0601 03:59:28.018917   10396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 03:59:28.056956   10396 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 03:59:28.058722   10396 cache_images.go:84] Images are preloaded, skipping loading
	I0601 03:59:28.058799   10396 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 03:59:28.149180   10396 cni.go:95] Creating CNI manager for ""
	I0601 03:59:28.149194   10396 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:59:28.149208   10396 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 03:59:28.149237   10396 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220601035912-2342 NodeName:kubernetes-upgrade-20220601035912-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd C
lientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 03:59:28.149402   10396 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220601035912-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220601035912-2342
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 03:59:28.149497   10396 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220601035912-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601035912-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 03:59:28.149568   10396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 03:59:28.159694   10396 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 03:59:28.159772   10396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 03:59:28.174464   10396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0601 03:59:28.192914   10396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 03:59:28.216767   10396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0601 03:59:28.234589   10396 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 03:59:28.238925   10396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 03:59:28.309860   10396 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342 for IP: 192.168.49.2
	I0601 03:59:28.309995   10396 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 03:59:28.310045   10396 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 03:59:28.310086   10396 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.key
	I0601 03:59:28.310098   10396 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.crt with IP's: []
	I0601 03:59:28.378398   10396 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.crt ...
	I0601 03:59:28.378421   10396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.crt: {Name:mk77bf1d316f900bcc5947abfdb279b8220195a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:59:28.378732   10396 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.key ...
	I0601 03:59:28.378741   10396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.key: {Name:mk168b8e1148cb71c0604d43045c90c2fee95dfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:59:28.379001   10396 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key.dd3b5fb2
	I0601 03:59:28.379021   10396 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 03:59:28.471610   10396 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.crt.dd3b5fb2 ...
	I0601 03:59:28.471621   10396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.crt.dd3b5fb2: {Name:mk78a49c5ab67803aa5c1aa1752d5692f28fb3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:59:28.471851   10396 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key.dd3b5fb2 ...
	I0601 03:59:28.471860   10396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key.dd3b5fb2: {Name:mk57d631716314844080124066bd507af716cf3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:59:28.472066   10396 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.crt
	I0601 03:59:28.472235   10396 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key
	I0601 03:59:28.472409   10396 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.key
	I0601 03:59:28.472425   10396 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.crt with IP's: []
	I0601 03:59:28.682163   10396 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.crt ...
	I0601 03:59:28.682178   10396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.crt: {Name:mk7ff5c19956549a46df3a2f7fb195b20ae63119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:59:28.682486   10396 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.key ...
	I0601 03:59:28.682497   10396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.key: {Name:mk25b6d25379815c17740dbd6ce44da13062b47d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:59:28.682929   10396 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 03:59:28.682980   10396 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 03:59:28.682991   10396 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 03:59:28.683023   10396 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 03:59:28.683057   10396 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 03:59:28.683093   10396 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 03:59:28.683160   10396 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 03:59:28.683767   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 03:59:28.702561   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 03:59:28.722109   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 03:59:28.739754   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 03:59:28.758441   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 03:59:28.776097   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 03:59:28.794263   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 03:59:28.815235   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 03:59:28.834123   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 03:59:28.852993   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 03:59:28.873390   10396 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 03:59:28.892698   10396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 03:59:28.905978   10396 ssh_runner.go:195] Run: openssl version
	I0601 03:59:28.912510   10396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 03:59:28.920869   10396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 03:59:28.924941   10396 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 03:59:28.924983   10396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 03:59:28.930324   10396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 03:59:28.938554   10396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 03:59:28.946134   10396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:59:28.950014   10396 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:59:28.950068   10396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 03:59:28.955193   10396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 03:59:28.962837   10396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 03:59:28.971109   10396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 03:59:28.974981   10396 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 03:59:28.975025   10396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 03:59:28.980045   10396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 03:59:28.990017   10396 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220601035912-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601035912-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:59:28.990113   10396 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 03:59:29.021878   10396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 03:59:29.030154   10396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 03:59:29.038200   10396 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 03:59:29.038258   10396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 03:59:29.046925   10396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 03:59:29.046955   10396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 03:59:29.865356   10396 out.go:204]   - Generating certificates and keys ...
	I0601 03:59:32.461162   10396 out.go:204]   - Booting up control plane ...
	W0601 04:01:27.376325   10396 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220601035912-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220601035912-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220601035912-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220601035912-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 04:01:27.376368   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:01:27.818088   10396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:01:27.829667   10396 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:01:27.829725   10396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:01:27.840744   10396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:01:27.840771   10396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:01:28.668091   10396 out.go:204]   - Generating certificates and keys ...
	I0601 04:01:29.699917   10396 out.go:204]   - Booting up control plane ...
	I0601 04:03:24.603628   10396 kubeadm.go:397] StartCluster complete in 3m55.611986741s
	I0601 04:03:24.603775   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:03:24.637729   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.637741   10396 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:03:24.637790   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:03:24.674159   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.674176   10396 logs.go:276] No container was found matching "etcd"
	I0601 04:03:24.674249   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:03:24.708006   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.708019   10396 logs.go:276] No container was found matching "coredns"
	I0601 04:03:24.708083   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:03:24.740953   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.740966   10396 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:03:24.741024   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:03:24.772156   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.772168   10396 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:03:24.772236   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:03:24.804779   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.804797   10396 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:03:24.804876   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:03:24.834929   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.834941   10396 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:03:24.834996   10396 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:03:24.867890   10396 logs.go:274] 0 containers: []
	W0601 04:03:24.867903   10396 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:03:24.867928   10396 logs.go:123] Gathering logs for kubelet ...
	I0601 04:03:24.867962   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:03:24.916408   10396 logs.go:123] Gathering logs for dmesg ...
	I0601 04:03:24.916424   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:03:24.931465   10396 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:03:24.931477   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:03:24.992342   10396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:03:24.992370   10396 logs.go:123] Gathering logs for Docker ...
	I0601 04:03:24.992396   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:03:25.006922   10396 logs.go:123] Gathering logs for container status ...
	I0601 04:03:25.006934   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:03:27.066501   10396 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05954059s)
	W0601 04:03:27.066686   10396 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 04:03:27.066704   10396 out.go:239] * 
	* 
	W0601 04:03:27.066853   10396 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:03:27.066870   10396 out.go:239] * 
	* 
	W0601 04:03:27.067451   10396 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 04:03:27.130577   10396 out.go:177] 
	W0601 04:03:27.172896   10396 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:03:27.173052   10396 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 04:03:27.173135   10396 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 04:03:27.215677   10396 out.go:177] 

** /stderr **
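A minimal sketch (not part of the captured test output) of the recovery path the failure above points to: the systemctl, journalctl and docker commands are the ones quoted in the kubeadm message, the cgroup-driver flag comes from minikube's own suggestion line, CONTAINERID is the placeholder from that message, and wrapping the node-side commands in 'minikube ssh' is an assumption made here because the kubelet runs inside the kic node container rather than on the macOS host.

	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-20220601035912-2342 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-20220601035912-2342 "sudo journalctl -xeu kubelet"
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-20220601035912-2342 "docker ps -a | grep kube | grep -v pause"
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-20220601035912-2342 "docker logs CONTAINERID"
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd --driver=docker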
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220601035912-2342

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220601035912-2342: (1.772983289s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601035912-2342 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601035912-2342 status --format={{.Host}}: exit status 7 (128.627694ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (33.219900966s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601035912-2342 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (722.402716ms)

-- stdout --
	* [kubernetes-upgrade-20220601035912-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220601035912-2342
	    minikube start -p kubernetes-upgrade-20220601035912-2342 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601035912-23422 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601035912-2342 --kubernetes-version=v1.23.6
	    

** /stderr **
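A hedged sketch of following option 1 from the suggestion above, bracketed by a version check: the kubectl --context invocation and the --driver=docker flag are copied from commands earlier in this log, and the delete/start pair is exactly what the suggestion prints, so this only stitches the steps together. Note that the delete removes the kubeconfig context, which the subsequent start recreates.

	kubectl --context kubernetes-upgrade-20220601035912-2342 version --output=json
	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220601035912-2342
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --kubernetes-version=v1.16.0 --driver=docker
	kubectl --context kubernetes-upgrade-20220601035912-2342 version --output=json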
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601035912-2342 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (14.510962475s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-06-01 04:04:17.77636 -0700 PDT m=+2684.415896612
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220601035912-2342
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220601035912-2342:

-- stdout --
	[
	    {
	        "Id": "c93ccb2e6e6e5dfb478d20b711f0e767ded7e71b87e994e1dfe2b41bc4f798c7",
	        "Created": "2022-06-01T10:59:22.020377767Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 146935,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:03:30.821140257Z",
	            "FinishedAt": "2022-06-01T11:03:27.839627534Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/c93ccb2e6e6e5dfb478d20b711f0e767ded7e71b87e994e1dfe2b41bc4f798c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c93ccb2e6e6e5dfb478d20b711f0e767ded7e71b87e994e1dfe2b41bc4f798c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/c93ccb2e6e6e5dfb478d20b711f0e767ded7e71b87e994e1dfe2b41bc4f798c7/hosts",
	        "LogPath": "/var/lib/docker/containers/c93ccb2e6e6e5dfb478d20b711f0e767ded7e71b87e994e1dfe2b41bc4f798c7/c93ccb2e6e6e5dfb478d20b711f0e767ded7e71b87e994e1dfe2b41bc4f798c7-json.log",
	        "Name": "/kubernetes-upgrade-20220601035912-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20220601035912-2342:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220601035912-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/42c9c669c2736108b29e0d22a754461155e542eed19c011ee59732f25a5ccff8-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42c9c669c2736108b29e0d22a754461155e542eed19c011ee59732f25a5ccff8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42c9c669c2736108b29e0d22a754461155e542eed19c011ee59732f25a5ccff8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42c9c669c2736108b29e0d22a754461155e542eed19c011ee59732f25a5ccff8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220601035912-2342",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220601035912-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220601035912-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220601035912-2342",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220601035912-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2e9a284a6038388d0b32891e5c7017f09b7b93abbd454a72b860c30315c1d6e9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64867"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64868"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64869"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64871"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2e9a284a6038",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220601035912-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c93ccb2e6e6e",
	                        "kubernetes-upgrade-20220601035912-2342"
	                    ],
	                    "NetworkID": "729d971b51ee93856634848de45c64e2c04a58bccfed765be4c82deeeaf59fab",
	                    "EndpointID": "befd52d398ada53182a6ce2de0f2f5f27a0ed7dd0cb88374e5b8ab1535f32d30",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220601035912-2342 -n kubernetes-upgrade-20220601035912-2342
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601035912-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601035912-2342 logs -n 25: (3.235536871s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                     | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:57 PDT | 01 Jun 22 03:58 PDT |
	|         | cert-options-20220601035748-2342       |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |         |                |                     |                     |
	| ssh     | cert-options-20220601035748-2342       | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:58 PDT | 01 Jun 22 03:58 PDT |
	|         | ssh openssl x509 -text -noout -in      |                                        |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |         |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:58 PDT | 01 Jun 22 03:58 PDT |
	|         | cert-options-20220601035748-2342       |                                        |         |                |                     |                     |
	|         | -- sudo cat                            |                                        |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:58 PDT | 01 Jun 22 03:58 PDT |
	|         | cert-options-20220601035748-2342       |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220601035801-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:59 PDT | 01 Jun 22 03:59 PDT |
	|         | running-upgrade-20220601035801-2342    |                                        |         |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220601035819-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:59 PDT | 01 Jun 22 03:59 PDT |
	|         | missing-upgrade-20220601035819-2342    |                                        |         |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220601035914-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:00 PDT | 01 Jun 22 04:00 PDT |
	|         | stopped-upgrade-20220601035914-2342    |                                        |         |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220601035914-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:00 PDT | 01 Jun 22 04:00 PDT |
	|         | stopped-upgrade-20220601035914-2342    |                                        |         |                |                     |                     |
	| start   | -p pause-20220601040007-2342           | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:00 PDT | 01 Jun 22 04:01 PDT |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |         |                |                     |                     |
	| start   | -p pause-20220601040007-2342           | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:01 PDT | 01 Jun 22 04:01 PDT |
	|         | --alsologtostderr -v=1                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| pause   | -p pause-20220601040007-2342           | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:01 PDT | 01 Jun 22 04:01 PDT |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	| logs    | pause-20220601040007-2342 logs         | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:02 PDT | 01 Jun 22 04:02 PDT |
	|         | -n 25                                  |                                        |         |                |                     |                     |
	| delete  | -p pause-20220601040007-2342           | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:02 PDT | 01 Jun 22 04:02 PDT |
	| start   | -p                                     | NoKubernetes-20220601040237-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:02 PDT | 01 Jun 22 04:03 PDT |
	|         | NoKubernetes-20220601040237-2342       |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220601040237-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	|         | NoKubernetes-20220601040237-2342       |                                        |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker        |                                        |         |                |                     |                     |
	| delete  | -p                                     | NoKubernetes-20220601040237-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	|         | NoKubernetes-20220601040237-2342       |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220601040237-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	|         | NoKubernetes-20220601040237-2342       |                                        |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker        |                                        |         |                |                     |                     |
	| profile | list                                   | minikube                               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	| profile | list --output=json                     | minikube                               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	| stop    | -p                                     | kubernetes-upgrade-20220601035912-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	|         | kubernetes-upgrade-20220601035912-2342 |                                        |         |                |                     |                     |
	| stop    | -p                                     | NoKubernetes-20220601040237-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	|         | NoKubernetes-20220601040237-2342       |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220601040237-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	|         | NoKubernetes-20220601040237-2342       |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| delete  | -p                                     | NoKubernetes-20220601040237-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:03 PDT |
	|         | NoKubernetes-20220601040237-2342       |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601035912-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:03 PDT | 01 Jun 22 04:04 PDT |
	|         | kubernetes-upgrade-20220601035912-2342 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601035912-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:04 PDT | 01 Jun 22 04:04 PDT |
	|         | kubernetes-upgrade-20220601035912-2342 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:04:03
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:04:03.334736   11761 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:04:03.334918   11761 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:04:03.334924   11761 out.go:309] Setting ErrFile to fd 2...
	I0601 04:04:03.334928   11761 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:04:03.335096   11761 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:04:03.335355   11761 out.go:303] Setting JSON to false
	I0601 04:04:03.354663   11761 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3813,"bootTime":1654077630,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:04:03.354747   11761 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:04:03.376928   11761 out.go:177] * [kubernetes-upgrade-20220601035912-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:04:03.419424   11761 notify.go:193] Checking for updates...
	I0601 04:04:03.456421   11761 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:04:03.530300   11761 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:04:03.604405   11761 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:04:03.662356   11761 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:04:03.720240   11761 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:04:03.757727   11761 config.go:178] Loaded profile config "kubernetes-upgrade-20220601035912-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:04:03.758113   11761 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:04:03.850443   11761 docker.go:137] docker version: linux-20.10.14
	I0601 04:04:03.850606   11761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:04:04.000800   11761 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-01 11:04:03.928389913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:04:04.024879   11761 out.go:177] * Using the docker driver based on existing profile
	I0601 04:04:04.062000   11761 start.go:284] selected driver: docker
	I0601 04:04:04.062017   11761 start.go:806] validating driver "docker" against &{Name:kubernetes-upgrade-20220601035912-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220601035912-2
342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false}
	I0601 04:04:04.062096   11761 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:04:04.064810   11761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:04:04.209208   11761 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-01 11:04:04.144813483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:04:04.209376   11761 cni.go:95] Creating CNI manager for ""
	I0601 04:04:04.209391   11761 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:04:04.209405   11761 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220601035912-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220601035912-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:04:04.251890   11761 out.go:177] * Starting control plane node kubernetes-upgrade-20220601035912-2342 in cluster kubernetes-upgrade-20220601035912-2342
	I0601 04:04:04.272858   11761 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:04:04.310018   11761 out.go:177] * Pulling base image ...
	I0601 04:04:04.331021   11761 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:04:04.331042   11761 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:04:04.331067   11761 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:04:04.331082   11761 cache.go:57] Caching tarball of preloaded images
	I0601 04:04:04.331198   11761 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:04:04.331209   11761 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 04:04:04.331766   11761 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/config.json ...
	I0601 04:04:04.398568   11761 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:04:04.398586   11761 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:04:04.398598   11761 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:04:04.398656   11761 start.go:352] acquiring machines lock for kubernetes-upgrade-20220601035912-2342: {Name:mkffe554b00abb9e85d2be7f80ec9bc94c71958c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:04:04.398742   11761 start.go:356] acquired machines lock for "kubernetes-upgrade-20220601035912-2342" in 65.868µs
	I0601 04:04:04.398759   11761 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:04:04.398768   11761 fix.go:55] fixHost starting: 
	I0601 04:04:04.399010   11761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Status}}
	I0601 04:04:04.472580   11761 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220601035912-2342: state=Running err=<nil>
	W0601 04:04:04.472613   11761 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:04:04.494034   11761 out.go:177] * Updating the running docker "kubernetes-upgrade-20220601035912-2342" container ...
	I0601 04:04:04.552092   11761 machine.go:88] provisioning docker machine ...
	I0601 04:04:04.552149   11761 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220601035912-2342"
	I0601 04:04:04.552267   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:04.626638   11761 main.go:134] libmachine: Using SSH client type: native
	I0601 04:04:04.626889   11761 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64867 <nil> <nil>}
	I0601 04:04:04.626907   11761 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220601035912-2342 && echo "kubernetes-upgrade-20220601035912-2342" | sudo tee /etc/hostname
	I0601 04:04:04.750532   11761 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220601035912-2342
	
	I0601 04:04:04.750616   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:04.827203   11761 main.go:134] libmachine: Using SSH client type: native
	I0601 04:04:04.827369   11761 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64867 <nil> <nil>}
	I0601 04:04:04.827386   11761 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220601035912-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220601035912-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220601035912-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:04:04.942455   11761 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:04:04.942474   11761 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:04:04.942494   11761 ubuntu.go:177] setting up certificates
	I0601 04:04:04.942505   11761 provision.go:83] configureAuth start
	I0601 04:04:04.942562   11761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:05.014622   11761 provision.go:138] copyHostCerts
	I0601 04:04:05.014713   11761 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:04:05.014723   11761 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:04:05.014829   11761 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:04:05.015045   11761 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:04:05.015055   11761 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:04:05.015116   11761 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:04:05.015252   11761 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:04:05.015258   11761 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:04:05.015318   11761 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:04:05.015476   11761 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220601035912-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220601035912-2342]
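
Note: configureAuth regenerates the Docker server certificate with the SAN list logged above, a mix of IP addresses and DNS names. A hedged Go sketch of how such a list could be split into an x509 certificate template; the function and variable names are illustrative and this is not minikube's code:

	package main

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"net"
	)

	// templateFromSANs sorts a mixed SAN list into IP and DNS entries, the way a
	// server-cert template needs them before signing.
	func templateFromSANs(org string, sans []string) *x509.Certificate {
		tmpl := &x509.Certificate{
			Subject:     pkix.Name{Organization: []string{org}},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		return tmpl
	}

	func main() {
		sans := []string{"192.168.49.2", "127.0.0.1", "localhost", "minikube", "kubernetes-upgrade-20220601035912-2342"}
		t := templateFromSANs("jenkins.kubernetes-upgrade-20220601035912-2342", sans)
		fmt.Println(len(t.IPAddresses), "IP SANs,", len(t.DNSNames), "DNS SANs")
	}
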
	I0601 04:04:05.204496   11761 provision.go:172] copyRemoteCerts
	I0601 04:04:05.204565   11761 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:04:05.204619   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:05.280123   11761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64867 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 04:04:05.366677   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:04:05.384328   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0601 04:04:05.401995   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:04:05.420792   11761 provision.go:86] duration metric: configureAuth took 478.269873ms
	I0601 04:04:05.420806   11761 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:04:05.420960   11761 config.go:178] Loaded profile config "kubernetes-upgrade-20220601035912-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:04:05.421010   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:05.493522   11761 main.go:134] libmachine: Using SSH client type: native
	I0601 04:04:05.493694   11761 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64867 <nil> <nil>}
	I0601 04:04:05.493704   11761 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:04:05.610469   11761 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:04:05.610480   11761 ubuntu.go:71] root file system type: overlay
	I0601 04:04:05.610600   11761 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:04:05.610667   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:05.686024   11761 main.go:134] libmachine: Using SSH client type: native
	I0601 04:04:05.686179   11761 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64867 <nil> <nil>}
	I0601 04:04:05.686234   11761 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:04:05.813039   11761 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:04:05.813126   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:05.886777   11761 main.go:134] libmachine: Using SSH client type: native
	I0601 04:04:05.886983   11761 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 64867 <nil> <nil>}
	I0601 04:04:05.886999   11761 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:04:06.009524   11761 main.go:134] libmachine: SSH cmd err, output: <nil>: 
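
Note: the unit file is written to docker.service.new first, and the command above only swaps it in and restarts Docker when diff reports a change, so an unchanged unit leaves the daemon untouched. A small Go sketch that assembles the same compare-then-install one-liner; the paths are parameters here, mirroring the logged command rather than minikube's internal helper:

	package main

	import "fmt"

	// installIfChanged returns the shell one-liner that replaces dst with src and
	// reloads/enables/restarts the unit only when the two files differ.
	func installIfChanged(src, dst, unit string) string {
		return fmt.Sprintf(
			"sudo diff -u %[2]s %[1]s || { sudo mv %[1]s %[2]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[3]s && sudo systemctl -f restart %[3]s; }",
			src, dst, unit)
	}

	func main() {
		fmt.Println(installIfChanged("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service", "docker"))
	}
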
	I0601 04:04:06.009546   11761 machine.go:91] provisioned docker machine in 1.45742931s
	I0601 04:04:06.009554   11761 start.go:306] post-start starting for "kubernetes-upgrade-20220601035912-2342" (driver="docker")
	I0601 04:04:06.009558   11761 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:04:06.009631   11761 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:04:06.009681   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:06.083474   11761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64867 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 04:04:06.167990   11761 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:04:06.171532   11761 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:04:06.171547   11761 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:04:06.171555   11761 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:04:06.171561   11761 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:04:06.171569   11761 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:04:06.171685   11761 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:04:06.171824   11761 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:04:06.171967   11761 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:04:06.179044   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:04:06.197639   11761 start.go:309] post-start completed in 188.075313ms
	I0601 04:04:06.197712   11761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:04:06.197763   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:06.274701   11761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64867 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 04:04:06.361215   11761 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:04:06.365821   11761 fix.go:57] fixHost completed within 1.967037495s
	I0601 04:04:06.365836   11761 start.go:81] releasing machines lock for "kubernetes-upgrade-20220601035912-2342", held for 1.967071699s
	I0601 04:04:06.365922   11761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:06.438509   11761 ssh_runner.go:195] Run: systemctl --version
	I0601 04:04:06.438514   11761 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:04:06.438701   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:06.438687   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:06.519025   11761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64867 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 04:04:06.520888   11761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64867 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 04:04:06.602531   11761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:04:06.737475   11761 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:04:06.748764   11761 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:04:06.748883   11761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:04:06.759058   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:04:06.772798   11761 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:04:06.860082   11761 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:04:06.963295   11761 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:04:06.974018   11761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:04:07.059843   11761 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:04:07.070057   11761 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:04:07.106037   11761 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:04:03.823131   11643 cni.go:95] Creating CNI manager for ""
	I0601 04:04:03.823149   11643 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:04:03.823190   11643 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:04:03.823262   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:03.823260   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=auto-20220601035306-2342 minikube.k8s.io/updated_at=2022_06_01T04_04_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:04.172386   11643 ops.go:34] apiserver oom_adj: -16
	I0601 04:04:04.172453   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:04.732228   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:05.231277   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:05.731306   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:06.232229   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:06.731264   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:07.231458   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:07.731237   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:07.163714   11761 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:04:07.163829   11761 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220601035912-2342 dig +short host.docker.internal
	I0601 04:04:07.304156   11761 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
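
Note: on Docker Desktop the host is only reachable from inside the node container via the host.docker.internal name, so the host IP (192.168.65.2 above) is resolved by running dig inside the container. A rough Go equivalent of that lookup, assuming the docker CLI is on PATH; the function name is illustrative:

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
	)

	// hostIPFromContainer asks the node container to resolve host.docker.internal.
	func hostIPFromContainer(container string) (net.IP, error) {
		out, err := exec.Command("docker", "exec", "-t", container, "dig", "+short", "host.docker.internal").Output()
		if err != nil {
			return nil, err
		}
		ip := net.ParseIP(strings.TrimSpace(string(out)))
		if ip == nil {
			return nil, fmt.Errorf("unexpected dig output: %q", out)
		}
		return ip, nil
	}

	func main() {
		ip, err := hostIPFromContainer("kubernetes-upgrade-20220601035912-2342")
		fmt.Println(ip, err)
	}
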
	I0601 04:04:07.304267   11761 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:04:07.309376   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:07.382433   11761 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:04:07.382509   11761 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:04:07.413205   11761 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0601 04:04:07.413220   11761 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:04:07.413282   11761 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:04:07.444000   11761 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0601 04:04:07.444021   11761 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:04:07.444094   11761 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:04:07.517896   11761 cni.go:95] Creating CNI manager for ""
	I0601 04:04:07.517908   11761 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:04:07.517925   11761 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:04:07.517939   11761 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220601035912-2342 NodeName:kubernetes-upgrade-20220601035912-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:04:07.518064   11761 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220601035912-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:04:07.518144   11761 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220601035912-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220601035912-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:04:07.518202   11761 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:04:07.527063   11761 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:04:07.527137   11761 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:04:07.535207   11761 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0601 04:04:07.548939   11761 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:04:07.562647   11761 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2059 bytes)
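
Note: the kubeadm config rendered above and copied to /var/tmp/minikube/kubeadm.yaml.new stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration as separate YAML documents in one file. A small sketch of reading such a multi-document file back and picking out kubernetesVersion, using gopkg.in/yaml.v3 (an assumed external dependency; minikube renders this file from templates rather than parsing it):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // path is illustrative
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Decode each "---"-separated document in turn until EOF.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc["kind"] == "ClusterConfiguration" {
				fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
			}
		}
	}
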
	I0601 04:04:07.575905   11761 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:04:07.579894   11761 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342 for IP: 192.168.49.2
	I0601 04:04:07.580014   11761 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:04:07.580064   11761 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:04:07.580148   11761 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.key
	I0601 04:04:07.580224   11761 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key.dd3b5fb2
	I0601 04:04:07.580291   11761 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.key
	I0601 04:04:07.580494   11761 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:04:07.580534   11761 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:04:07.580546   11761 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:04:07.580581   11761 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:04:07.580610   11761 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:04:07.580641   11761 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:04:07.580702   11761 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:04:07.581277   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:04:07.599088   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:04:07.617409   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:04:07.636037   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:04:07.654352   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:04:07.674351   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:04:07.692152   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:04:07.710349   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:04:07.728020   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:04:07.748437   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:04:07.768657   11761 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:04:07.789151   11761 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:04:07.804332   11761 ssh_runner.go:195] Run: openssl version
	I0601 04:04:07.810292   11761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:04:07.818429   11761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:04:07.822278   11761 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:04:07.822325   11761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:04:07.827780   11761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:04:07.835203   11761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:04:07.843405   11761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:04:07.847821   11761 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:04:07.847872   11761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:04:07.853411   11761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:04:07.861360   11761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:04:07.869058   11761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:04:07.872901   11761 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:04:07.872945   11761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:04:07.878179   11761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
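
Note: each CA file copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 above), which is what lets OpenSSL-based clients find it. A local Go sketch of that hash-and-symlink step, shelling out to openssl the same way the logged commands do; the paths are parameters, not minikube's fixed layout:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash symlinks certPath into certsDir under its OpenSSL subject hash,
	// e.g. /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace an existing link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
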
	I0601 04:04:07.885749   11761 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220601035912-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220601035912-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:04:07.885860   11761 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:04:07.917481   11761 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:04:07.925954   11761 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:04:07.925972   11761 kubeadm.go:626] restartCluster start
	I0601 04:04:07.926022   11761 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:04:07.933698   11761 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:04:07.933766   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:08.006993   11761 kubeconfig.go:92] found "kubernetes-upgrade-20220601035912-2342" server: "https://127.0.0.1:64871"
	I0601 04:04:08.007441   11761 kapi.go:59] client config for kubernetes-upgrade-20220601035912-2342: &rest.Config{Host:"https://127.0.0.1:64871", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 04:04:08.007935   11761 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:04:08.015851   11761 api_server.go:165] Checking apiserver status ...
	I0601 04:04:08.015909   11761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:04:08.025123   11761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1460/cgroup
	W0601 04:04:08.034126   11761 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1460/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:04:08.034140   11761 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64871/healthz ...
	I0601 04:04:08.039484   11761 api_server.go:266] https://127.0.0.1:64871/healthz returned 200:
	ok
	I0601 04:04:08.050480   11761 system_pods.go:86] 5 kube-system pods found
	I0601 04:04:08.050536   11761 system_pods.go:89] "etcd-kubernetes-upgrade-20220601035912-2342" [8ed14826-d80a-4fff-8b08-45975cbb0ff7] Running
	I0601 04:04:08.050546   11761 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220601035912-2342" [d42c578f-6eec-46f1-9a2f-363f2ce281e3] Running
	I0601 04:04:08.050554   11761 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220601035912-2342" [a30c4324-3c26-415c-8001-1b776ad887b5] Running
	I0601 04:04:08.050561   11761 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220601035912-2342" [9a8809ec-ce4b-4556-a8fd-38a8dcca9df3] Running
	I0601 04:04:08.050572   11761 system_pods.go:89] "storage-provisioner" [d297fcf8-7fe0-41b6-b0e9-b26cfaad9f0c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 04:04:08.050581   11761 kubeadm.go:610] needs reconfigure: missing components: kube-dns, kube-proxy
	I0601 04:04:08.050607   11761 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:04:08.050699   11761 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:04:08.079729   11761 docker.go:442] Stopping containers: [50a9c54651ba 5c6bb0ecbdf9 e4d072f09c4b 97f6875d048a 9d92767c7168 c4840e38337f 710553a169c5 cfb051273270]
	I0601 04:04:08.079815   11761 ssh_runner.go:195] Run: docker stop 50a9c54651ba 5c6bb0ecbdf9 e4d072f09c4b 97f6875d048a 9d92767c7168 c4840e38337f 710553a169c5 cfb051273270
	I0601 04:04:08.231960   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:08.731608   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:09.231981   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:09.731889   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:10.231517   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:10.732239   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:11.231362   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:11.731469   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:12.231506   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:12.732265   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:09.318575   11761 ssh_runner.go:235] Completed: docker stop 50a9c54651ba 5c6bb0ecbdf9 e4d072f09c4b 97f6875d048a 9d92767c7168 c4840e38337f 710553a169c5 cfb051273270: (1.238730589s)
	I0601 04:04:09.318668   11761 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:04:09.409367   11761 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:04:09.417622   11761 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5759 Jun  1 11:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5791 Jun  1 11:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5951 Jun  1 11:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5739 Jun  1 11:01 /etc/kubernetes/scheduler.conf
	
	I0601 04:04:09.417676   11761 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:04:09.427962   11761 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:04:09.437060   11761 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:04:09.445588   11761 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:04:09.453094   11761 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:04:09.460869   11761 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:04:09.460882   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:04:09.503960   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:04:10.740919   11761 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.236917336s)
	I0601 04:04:10.740953   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:04:10.881005   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:04:10.927057   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:04:10.990262   11761 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:04:10.990321   11761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:04:11.500605   11761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:04:12.000510   11761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:04:12.014479   11761 api_server.go:71] duration metric: took 1.024212786s to wait for apiserver process to appear ...
	I0601 04:04:12.014495   11761 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:04:12.014503   11761 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64871/healthz ...
	I0601 04:04:12.015685   11761 api_server.go:256] stopped: https://127.0.0.1:64871/healthz: Get "https://127.0.0.1:64871/healthz": EOF
	I0601 04:04:12.515954   11761 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64871/healthz ...
	I0601 04:04:14.805083   11761 api_server.go:266] https://127.0.0.1:64871/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:04:14.805108   11761 api_server.go:102] status: https://127.0.0.1:64871/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:04:15.015955   11761 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64871/healthz ...
	I0601 04:04:15.021891   11761 api_server.go:266] https://127.0.0.1:64871/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:04:15.021907   11761 api_server.go:102] status: https://127.0.0.1:64871/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:04:15.515796   11761 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64871/healthz ...
	I0601 04:04:15.523527   11761 api_server.go:266] https://127.0.0.1:64871/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:04:15.523547   11761 api_server.go:102] status: https://127.0.0.1:64871/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:04:16.015855   11761 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64871/healthz ...
	I0601 04:04:16.021340   11761 api_server.go:266] https://127.0.0.1:64871/healthz returned 200:
	ok
	I0601 04:04:16.027815   11761 api_server.go:140] control plane version: v1.23.6
	I0601 04:04:16.027828   11761 api_server.go:130] duration metric: took 4.013299929s to wait for apiserver health ...
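
Note: the restart path above polls /healthz until the apiserver stops answering 403/500 and returns 200, which took about four seconds here. A bare-bones Go sketch of such a wait loop; unlike minikube's real client (see the rest.Config dump above, which uses the cluster CA and client certs) this one skips TLS verification purely to keep the example short, which is an assumption and not the actual behaviour:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url every 500ms until it returns 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://127.0.0.1:64871/healthz", time.Minute))
	}
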
	I0601 04:04:16.027833   11761 cni.go:95] Creating CNI manager for ""
	I0601 04:04:16.027840   11761 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:04:16.027848   11761 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:04:16.033222   11761 system_pods.go:59] 5 kube-system pods found
	I0601 04:04:16.033234   11761 system_pods.go:61] "etcd-kubernetes-upgrade-20220601035912-2342" [8ed14826-d80a-4fff-8b08-45975cbb0ff7] Running
	I0601 04:04:16.033238   11761 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220601035912-2342" [d42c578f-6eec-46f1-9a2f-363f2ce281e3] Running
	I0601 04:04:16.033247   11761 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220601035912-2342" [a30c4324-3c26-415c-8001-1b776ad887b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:04:16.033253   11761 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220601035912-2342" [9a8809ec-ce4b-4556-a8fd-38a8dcca9df3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 04:04:16.033261   11761 system_pods.go:61] "storage-provisioner" [d297fcf8-7fe0-41b6-b0e9-b26cfaad9f0c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 04:04:16.033267   11761 system_pods.go:74] duration metric: took 5.415531ms to wait for pod list to return data ...
	I0601 04:04:16.033275   11761 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:04:16.035911   11761 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:04:16.035926   11761 node_conditions.go:123] node cpu capacity is 6
	I0601 04:04:16.035937   11761 node_conditions.go:105] duration metric: took 2.658108ms to run NodePressure ...
	I0601 04:04:16.035949   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:04:16.194021   11761 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:04:16.203278   11761 ops.go:34] apiserver oom_adj: -16
	I0601 04:04:16.203289   11761 kubeadm.go:630] restartCluster took 8.277255597s
	I0601 04:04:16.203297   11761 kubeadm.go:397] StartCluster complete in 8.317492429s
	I0601 04:04:16.203312   11761 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:04:16.203387   11761 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:04:16.204062   11761 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:04:16.204711   11761 kapi.go:59] client config for kubernetes-upgrade-20220601035912-2342: &rest.Config{Host:"https://127.0.0.1:64871", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 04:04:16.207748   11761 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220601035912-2342" rescaled to 1
	I0601 04:04:16.207785   11761 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:04:16.207818   11761 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:04:16.207849   11761 addons.go:415] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0601 04:04:16.207932   11761 config.go:178] Loaded profile config "kubernetes-upgrade-20220601035912-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:04:16.284443   11761 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220601035912-2342"
	I0601 04:04:16.284266   11761 out.go:177] * Verifying Kubernetes components...
	I0601 04:04:16.284469   11761 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220601035912-2342"
	I0601 04:04:13.231592   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:13.731368   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:14.231725   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:14.731549   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:15.231444   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:15.731953   11643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:04:15.802930   11643 kubeadm.go:1045] duration metric: took 11.979653808s to wait for elevateKubeSystemPrivileges.
	I0601 04:04:15.802947   11643 kubeadm.go:397] StartCluster complete in 23.412679439s
	I0601 04:04:15.802968   11643 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:04:15.803051   11643 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:04:15.803835   11643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:04:16.325995   11643 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20220601035306-2342" rescaled to 1
	I0601 04:04:16.326065   11643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:04:16.326042   11643 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:04:16.326157   11643 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 04:04:16.326317   11643 config.go:178] Loaded profile config "auto-20220601035306-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:04:16.366292   11643 out.go:177] * Verifying Kubernetes components...
	I0601 04:04:16.284474   11761 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220601035912-2342"
	W0601 04:04:16.305267   11761 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:04:16.284510   11761 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220601035912-2342"
	I0601 04:04:16.305306   11761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:04:16.305350   11761 host.go:66] Checking if "kubernetes-upgrade-20220601035912-2342" exists ...
	I0601 04:04:16.305560   11761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Status}}
	I0601 04:04:16.305678   11761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Status}}
	I0601 04:04:16.385731   11761 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 04:04:16.385750   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:16.461151   11761 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:04:16.366414   11643 addons.go:65] Setting storage-provisioner=true in profile "auto-20220601035306-2342"
	I0601 04:04:16.366425   11643 addons.go:65] Setting default-storageclass=true in profile "auto-20220601035306-2342"
	I0601 04:04:16.407325   11643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20220601035306-2342"
	I0601 04:04:16.407340   11643 addons.go:153] Setting addon storage-provisioner=true in "auto-20220601035306-2342"
	W0601 04:04:16.407380   11643 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:04:16.407386   11643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:04:16.407435   11643 host.go:66] Checking if "auto-20220601035306-2342" exists ...
	I0601 04:04:16.407929   11643 cli_runner.go:164] Run: docker container inspect auto-20220601035306-2342 --format={{.State.Status}}
	I0601 04:04:16.407944   11643 cli_runner.go:164] Run: docker container inspect auto-20220601035306-2342 --format={{.State.Status}}
	I0601 04:04:16.422471   11643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:04:16.435486   11643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20220601035306-2342
	I0601 04:04:16.573152   11643 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:04:16.489883   11761 kapi.go:59] client config for kubernetes-upgrade-20220601035912-2342: &rest.Config{Host:"https://127.0.0.1:64871", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601035912-2342/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 04:04:16.498348   11761 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:04:16.498370   11761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:04:16.498458   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:16.513307   11761 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220601035912-2342"
	W0601 04:04:16.513336   11761 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:04:16.513362   11761 host.go:66] Checking if "kubernetes-upgrade-20220601035912-2342" exists ...
	I0601 04:04:16.513795   11761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601035912-2342 --format={{.State.Status}}
	I0601 04:04:16.538577   11761 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:04:16.538726   11761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:04:16.565183   11761 api_server.go:71] duration metric: took 357.37364ms to wait for apiserver process to appear ...
	I0601 04:04:16.565209   11761 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:04:16.565223   11761 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64871/healthz ...
	I0601 04:04:16.572267   11761 api_server.go:266] https://127.0.0.1:64871/healthz returned 200:
	ok
	I0601 04:04:16.573692   11761 api_server.go:140] control plane version: v1.23.6
	I0601 04:04:16.573705   11761 api_server.go:130] duration metric: took 8.489571ms to wait for apiserver health ...
	I0601 04:04:16.573712   11761 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:04:16.583258   11761 system_pods.go:59] 5 kube-system pods found
	I0601 04:04:16.583280   11761 system_pods.go:61] "etcd-kubernetes-upgrade-20220601035912-2342" [8ed14826-d80a-4fff-8b08-45975cbb0ff7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:04:16.583295   11761 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220601035912-2342" [d42c578f-6eec-46f1-9a2f-363f2ce281e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 04:04:16.583307   11761 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220601035912-2342" [a30c4324-3c26-415c-8001-1b776ad887b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:04:16.583315   11761 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220601035912-2342" [9a8809ec-ce4b-4556-a8fd-38a8dcca9df3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 04:04:16.583329   11761 system_pods.go:61] "storage-provisioner" [d297fcf8-7fe0-41b6-b0e9-b26cfaad9f0c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 04:04:16.583338   11761 system_pods.go:74] duration metric: took 9.62025ms to wait for pod list to return data ...
	I0601 04:04:16.583346   11761 kubeadm.go:572] duration metric: took 375.540743ms to wait for : map[apiserver:true system_pods:true] ...
	I0601 04:04:16.583357   11761 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:04:16.586742   11761 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:04:16.586764   11761 node_conditions.go:123] node cpu capacity is 6
	I0601 04:04:16.586776   11761 node_conditions.go:105] duration metric: took 3.414761ms to run NodePressure ...
	I0601 04:04:16.586785   11761 start.go:213] waiting for startup goroutines ...
	I0601 04:04:16.643274   11761 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:04:16.643288   11761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:04:16.643341   11761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601035912-2342
	I0601 04:04:16.647026   11761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64867 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 04:04:16.739805   11761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64867 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601035912-2342/id_rsa Username:docker}
	I0601 04:04:16.749127   11761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:04:16.857706   11761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:04:17.635291   11761 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 04:04:17.641837   11761 addons.go:417] enableAddons completed in 1.433996956s
	I0601 04:04:17.672332   11761 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:04:17.694455   11761 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220601035912-2342" cluster and "default" namespace by default
	I0601 04:04:16.578409   11643 addons.go:153] Setting addon default-storageclass=true in "auto-20220601035306-2342"
	W0601 04:04:16.609166   11643 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:04:16.605003   11643 node_ready.go:35] waiting up to 5m0s for node "auto-20220601035306-2342" to be "Ready" ...
	I0601 04:04:16.609198   11643 host.go:66] Checking if "auto-20220601035306-2342" exists ...
	I0601 04:04:16.609215   11643 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:04:16.609225   11643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:04:16.609288   11643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601035306-2342
	I0601 04:04:16.610343   11643 cli_runner.go:164] Run: docker container inspect auto-20220601035306-2342 --format={{.State.Status}}
	I0601 04:04:16.626542   11643 node_ready.go:49] node "auto-20220601035306-2342" has status "Ready":"True"
	I0601 04:04:16.626558   11643 node_ready.go:38] duration metric: took 17.365106ms waiting for node "auto-20220601035306-2342" to be "Ready" ...
	I0601 04:04:16.626567   11643 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:04:16.635653   11643 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-dm549" in "kube-system" namespace to be "Ready" ...
	I0601 04:04:16.711791   11643 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:04:16.711810   11643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:04:16.711910   11643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220601035306-2342
	I0601 04:04:16.717200   11643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65193 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601035306-2342/id_rsa Username:docker}
	I0601 04:04:16.798929   11643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65193 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/auto-20220601035306-2342/id_rsa Username:docker}
	I0601 04:04:16.896552   11643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:04:17.098154   11643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:04:17.832556   11643 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.410042125s)
	I0601 04:04:17.832578   11643 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 04:04:17.933463   11643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036857384s)
	I0601 04:04:17.957642   11643 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 04:04:17.999088   11643 addons.go:417] enableAddons completed in 1.672920222s
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:03:31 UTC, end at Wed 2022-06-01 11:04:19 UTC. --
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.192724845Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.192839743Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.192906158Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.192953818Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.195851286Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.195939606Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.195991394Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:03:45 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:45.196037218Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.316430290Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.328893198Z" level=info msg="Loading containers: start."
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.415331934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.450082679Z" level=info msg="Loading containers: done."
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.460941781Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.461043712Z" level=info msg="Daemon has completed initialization"
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 systemd[1]: Started Docker Application Container Engine.
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.488450276Z" level=info msg="API listen on [::]:2376"
	Jun 01 11:03:46 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:03:46.491919079Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 11:04:08 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:08.220988148Z" level=info msg="ignoring event" container=710553a169c53731814829d978caba051b613ddc78c9a7cfffdf3398109d5636 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:04:08 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:08.223424358Z" level=info msg="ignoring event" container=c4840e38337f303ee298eca1a5a5e6fd5240f90b9bcc7f1779767fe975c63f45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:04:08 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:08.223698333Z" level=info msg="ignoring event" container=9d92767c7168bbd462a748e0a16f9b3d3bcefc15226be349da61dbd0e7224ae2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:04:08 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:08.224077479Z" level=info msg="ignoring event" container=50a9c54651bad5e7602efdcb42ebfa0287b38f79ad9d66c66b7bbc96de23dbcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:04:08 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:08.224596669Z" level=info msg="ignoring event" container=cfb051273270241baf92c8974449c2261d9b87164ea87047b13a610acdfd6716 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:04:08 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:08.227693578Z" level=info msg="ignoring event" container=97f6875d048a1f8a94aff88559dee8fcb64be1337a338049b6abba5a55c88158 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:04:09 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:09.213268930Z" level=info msg="ignoring event" container=5c6bb0ecbdf9feb529a7e3a050880ba023623b93adc7ad4b036302d96fddc686 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:04:09 kubernetes-upgrade-20220601035912-2342 dockerd[525]: time="2022-06-01T11:04:09.253204855Z" level=info msg="ignoring event" container=e4d072f09c4b876974fd954c7928013b8427d094b738999296a976c3ca8dac81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5abb73bfc0377       8fa62c12256df       8 seconds ago       Running             kube-apiserver            1                   a0a0f11838b94
	3fc105585eb6b       df7b72818ad2e       8 seconds ago       Running             kube-controller-manager   1                   102d8a3d8fbe1
	8ec5631bb3117       25f8c7f3da61c       8 seconds ago       Running             etcd                      1                   b726a16d69783
	8a3ea70d4b9d8       595f327f224a4       8 seconds ago       Running             kube-scheduler            1                   5d28469ba92a7
	50a9c54651bad       25f8c7f3da61c       30 seconds ago      Exited              etcd                      0                   cfb0512732702
	5c6bb0ecbdf9f       595f327f224a4       30 seconds ago      Exited              kube-scheduler            0                   9d92767c7168b
	e4d072f09c4b8       8fa62c12256df       30 seconds ago      Exited              kube-apiserver            0                   710553a169c53
	97f6875d048a1       df7b72818ad2e       30 seconds ago      Exited              kube-controller-manager   0                   c4840e38337f3
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220601035912-2342
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220601035912-2342
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:03:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220601035912-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:04:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:04:14 +0000   Wed, 01 Jun 2022 11:03:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:04:14 +0000   Wed, 01 Jun 2022 11:03:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:04:14 +0000   Wed, 01 Jun 2022 11:03:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:04:14 +0000   Wed, 01 Jun 2022 11:04:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    kubernetes-upgrade-20220601035912-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                04b83f60-c863-4c08-a43e-f42304a65cef
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220601035912-2342                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         25s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220601035912-2342             250m (4%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220601035912-2342    200m (3%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220601035912-2342             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 31s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s (x8 over 31s)  kubelet  Node kubernetes-upgrade-20220601035912-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 31s)  kubelet  Node kubernetes-upgrade-20220601035912-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 31s)  kubelet  Node kubernetes-upgrade-20220601035912-2342 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x4 over 8s)    kubelet  Node kubernetes-upgrade-20220601035912-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet  Node kubernetes-upgrade-20220601035912-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x3 over 8s)    kubelet  Node kubernetes-upgrade-20220601035912-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.001456] FS-Cache: O-key=[8] '1a55850300000000'
	[  +0.001119] FS-Cache: N-cookie c=00000000e6a5beda [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.001800] FS-Cache: N-cookie d=00000000cf9b0095 n=00000000a7c12bf4
	[  +0.001489] FS-Cache: N-key=[8] '1a55850300000000'
	[  +0.002341] FS-Cache: Duplicate cookie detected
	[  +0.001051] FS-Cache: O-cookie c=00000000dda9b39f [p=00000000b0a9f61c fl=226 nc=0 na=1]
	[  +0.001790] FS-Cache: O-cookie d=00000000cf9b0095 n=00000000e9dcd918
	[  +0.001537] FS-Cache: O-key=[8] '1a55850300000000'
	[  +0.001114] FS-Cache: N-cookie c=00000000e6a5beda [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.001838] FS-Cache: N-cookie d=00000000cf9b0095 n=00000000a59d38ab
	[  +0.001434] FS-Cache: N-key=[8] '1a55850300000000'
	[  +3.677007] FS-Cache: Duplicate cookie detected
	[  +0.001038] FS-Cache: O-cookie c=00000000a106af5f [p=00000000b0a9f61c fl=226 nc=0 na=1]
	[  +0.001807] FS-Cache: O-cookie d=00000000cf9b0095 n=000000007513b1d2
	[  +0.001597] FS-Cache: O-key=[8] '1955850300000000'
	[  +0.001172] FS-Cache: N-cookie c=00000000163774b8 [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.001966] FS-Cache: N-cookie d=00000000cf9b0095 n=00000000a8817ec9
	[  +0.001503] FS-Cache: N-key=[8] '1955850300000000'
	[  +0.707476] FS-Cache: Duplicate cookie detected
	[  +0.001066] FS-Cache: O-cookie c=00000000558c30a4 [p=00000000b0a9f61c fl=226 nc=0 na=1]
	[  +0.001781] FS-Cache: O-cookie d=00000000cf9b0095 n=000000003898637e
	[  +0.001876] FS-Cache: O-key=[8] '2355850300000000'
	[  +0.001295] FS-Cache: N-cookie c=0000000080d67bd4 [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.002050] FS-Cache: N-cookie d=00000000cf9b0095 n=0000000056102690
	[  +0.001550] FS-Cache: N-key=[8] '2355850300000000'
	
	* 
	* ==> etcd [50a9c54651ba] <==
	* {"level":"info","ts":"2022-06-01T11:03:50.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:03:50.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:03:50.229Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:kubernetes-upgrade-20220601035912-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:03:50.229Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.229Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:03:50.230Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.231Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:03:50.231Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:03:50.231Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.231Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:03:50.233Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:03:50.233Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:04:03.277Z","caller":"traceutil/trace.go:171","msg":"trace[296478647] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"175.267374ms","start":"2022-06-01T11:04:03.102Z","end":"2022-06-01T11:04:03.277Z","steps":["trace[296478647] 'process raft request'  (duration: 115.146951ms)","trace[296478647] 'compare'  (duration: 59.686439ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:04:08.141Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-01T11:04:08.141Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"kubernetes-upgrade-20220601035912-2342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/01 11:04:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/01 11:04:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-01T11:04:08.151Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-01T11:04:08.152Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:04:08.153Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:04:08.153Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"kubernetes-upgrade-20220601035912-2342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [8ec5631bb311] <==
	* {"level":"info","ts":"2022-06-01T11:04:11.823Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-01T11:04:11.823Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-01T11:04:11.823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:04:11.823Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:04:11.823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:04:11.823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:04:11.825Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:04:11.825Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:04:11.825Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:04:11.825Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:04:11.825Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:04:13.315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-01T11:04:13.315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:04:13.315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:04:13.315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T11:04:13.315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T11:04:13.315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-06-01T11:04:13.315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T11:04:13.316Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:kubernetes-upgrade-20220601035912-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:04:13.316Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:04:13.316Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:04:13.317Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:04:13.317Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:04:13.318Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:04:13.318Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  11:04:20 up 44 min,  0 users,  load average: 1.23, 0.95, 0.87
	Linux kubernetes-upgrade-20220601035912-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [5abb73bfc037] <==
	* I0601 11:04:14.815340       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0601 11:04:14.815355       1 controller.go:85] Starting OpenAPI controller
	I0601 11:04:14.815365       1 naming_controller.go:291] Starting NamingConditionController
	I0601 11:04:14.815374       1 establishing_controller.go:76] Starting EstablishingController
	I0601 11:04:14.815413       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0601 11:04:14.815446       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0601 11:04:14.815455       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0601 11:04:14.820096       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0601 11:04:14.842040       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0601 11:04:14.901970       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:04:14.904850       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:04:14.914575       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:04:14.915384       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:04:14.915643       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0601 11:04:14.944947       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:04:15.004625       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:04:15.004678       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:04:15.800907       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:04:15.800952       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:04:15.804603       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:04:16.124986       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:04:16.130051       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:04:16.171916       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:04:16.192775       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:04:16.201281       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [e4d072f09c4b] <==
	* W0601 11:04:09.144947       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.144949       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.144966       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.144979       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.144990       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.144998       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145018       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145019       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145079       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145084       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145091       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145104       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145106       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145125       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145126       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145134       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145158       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145159       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145185       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145187       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145193       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145210       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145240       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145323       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:04:09.145719       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [3fc105585eb6] <==
	* I0601 11:04:17.235839       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
	I0601 11:04:17.235881       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
	I0601 11:04:17.235892       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
	I0601 11:04:17.235903       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I0601 11:04:17.235921       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
	I0601 11:04:17.235937       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps
	I0601 11:04:17.235948       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
	I0601 11:04:17.236023       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
	I0601 11:04:17.236032       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0601 11:04:17.236048       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0601 11:04:17.236063       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0601 11:04:17.236126       1 controllermanager.go:605] Started "resourcequota"
	I0601 11:04:17.236153       1 resource_quota_controller.go:273] Starting resource quota controller
	I0601 11:04:17.236161       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0601 11:04:17.236176       1 resource_quota_monitor.go:308] QuotaMonitor running
	I0601 11:04:17.325338       1 controllermanager.go:605] Started "disruption"
	I0601 11:04:17.325396       1 disruption.go:363] Starting disruption controller
	I0601 11:04:17.325401       1 shared_informer.go:240] Waiting for caches to sync for disruption
	I0601 11:04:17.480560       1 controllermanager.go:605] Started "pv-protection"
	I0601 11:04:17.480625       1 pv_protection_controller.go:79] Starting PV protection controller
	I0601 11:04:17.480646       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0601 11:04:17.674540       1 controllermanager.go:605] Started "horizontalpodautoscaling"
	I0601 11:04:17.674604       1 horizontal.go:168] Starting HPA controller
	I0601 11:04:17.674616       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0601 11:04:17.724882       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [97f6875d048a] <==
	* I0601 11:03:57.973522       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0601 11:03:57.973568       1 controllermanager.go:605] Started "csrsigning"
	I0601 11:03:57.973595       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0601 11:03:57.973654       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0601 11:03:57.973650       1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0601 11:03:57.973602       1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0601 11:03:58.122281       1 controllermanager.go:605] Started "tokencleaner"
	I0601 11:03:58.122326       1 tokencleaner.go:118] Starting token cleaner controller
	I0601 11:03:58.122331       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0601 11:03:58.122337       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0601 11:03:58.272646       1 controllermanager.go:605] Started "persistentvolume-binder"
	I0601 11:03:58.272672       1 pv_controller_base.go:310] Starting persistent volume controller
	I0601 11:03:58.272755       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0601 11:03:58.422439       1 controllermanager.go:605] Started "attachdetach"
	I0601 11:03:58.422574       1 attach_detach_controller.go:328] Starting attach detach controller
	I0601 11:03:58.422593       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0601 11:03:58.572384       1 controllermanager.go:605] Started "root-ca-cert-publisher"
	I0601 11:03:58.572511       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0601 11:03:58.572537       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0601 11:03:58.871548       1 controllermanager.go:605] Started "horizontalpodautoscaling"
	I0601 11:03:58.871572       1 horizontal.go:168] Starting HPA controller
	I0601 11:03:58.871582       1 shared_informer.go:240] Waiting for caches to sync for HPA
	I0601 11:03:59.022959       1 controllermanager.go:605] Started "bootstrapsigner"
	I0601 11:03:59.022988       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0601 11:03:59.072086       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [5c6bb0ecbdf9] <==
	* E0601 11:03:53.033798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.032134       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:03:53.033856       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:03:53.032313       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:03:53.033915       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:03:53.031534       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:03:53.034027       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:03:53.031771       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.034052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.927396       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:03:53.927419       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:03:53.974563       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:03:53.974605       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:03:53.986762       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:03:53.986779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:03:54.123069       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:03:54.123086       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:03:54.129350       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:03:54.129388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:03:54.189824       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:03:54.189870       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:03:55.926756       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0601 11:04:08.159182       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:04:08.159435       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0601 11:04:08.159589       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [8a3ea70d4b9d] <==
	* I0601 11:04:12.421159       1 serving.go:348] Generated self-signed cert in-memory
	W0601 11:04:14.826271       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 11:04:14.826314       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 11:04:14.826323       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 11:04:14.826328       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 11:04:14.843509       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 11:04:14.897634       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 11:04:14.897977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 11:04:14.902398       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:04:14.902409       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0601 11:04:14.909532       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:04:14.909610       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:04:14.910626       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:04:14.910662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:04:14.911078       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0601 11:04:14.911129       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0601 11:04:14.911248       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:04:14.911331       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:04:14.911601       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0601 11:04:14.911663       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0601 11:04:14.911906       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:04:14.911964       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:04:15.903256       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:03:31 UTC, end at Wed 2022-06-01 11:04:21 UTC. --
	Jun 01 11:04:12 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:12.904768    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.005905    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.106897    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.208111    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.309192    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.409788    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.509929    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.610545    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.711040    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.811196    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:13 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:13.915580    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.016415    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.116802    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.217354    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.318311    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.419529    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.520422    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.621147    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.721748    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:14.823671    2591 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601035912-2342\" not found"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: I0601 11:04:14.927512    2591 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220601035912-2342"
	Jun 01 11:04:14 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: I0601 11:04:14.927692    2591 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220601035912-2342"
	Jun 01 11:04:15 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: I0601 11:04:15.099999    2591 apiserver.go:52] "Watching apiserver"
	Jun 01 11:04:15 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: I0601 11:04:15.224737    2591 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:04:15 kubernetes-upgrade-20220601035912-2342 kubelet[2591]: E0601 11:04:15.484404    2591 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-20220601035912-2342\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-20220601035912-2342"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220601035912-2342 -n kubernetes-upgrade-20220601035912-2342
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601035912-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220601035912-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.646847139s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601035912-2342 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220601035912-2342 describe pod storage-provisioner: exit status 1 (46.61589ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220601035912-2342 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220601035912-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220601035912-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220601035912-2342: (3.231882978s)
--- FAIL: TestKubernetesUpgrade (314.04s)

                                                
                                    
x
+
TestMissingContainerUpgrade (55.39s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.624133609.exe start -p missing-upgrade-20220601035819-2342 --memory=2200 --driver=docker 
E0601 03:58:21.582019    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.624133609.exe start -p missing-upgrade-20220601035819-2342 --memory=2200 --driver=docker : exit status 78 (39.189947055s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220601035819-2342] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220601035819-2342
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220601035819-2342" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 32.00 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 75.61 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 136.22 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 190.56 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 248.00 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 296.00 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 340.80 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 401.56 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 455.52 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 508.80 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:58:33.483006829 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220601035819-2342" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:58:56.859682287 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.624133609.exe start -p missing-upgrade-20220601035819-2342 --memory=2200 --driver=docker 
E0601 03:59:02.546234    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.624133609.exe start -p missing-upgrade-20220601035819-2342 --memory=2200 --driver=docker : exit status 70 (4.669068909s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220601035819-2342] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220601035819-2342
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220601035819-2342" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.624133609.exe start -p missing-upgrade-20220601035819-2342 --memory=2200 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.624133609.exe start -p missing-upgrade-20220601035819-2342 --memory=2200 --driver=docker : exit status 70 (4.819800387s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220601035819-2342] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220601035819-2342
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220601035819-2342" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-06-01 03:59:11.501072 -0700 PDT m=+2378.142722852
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220601035819-2342
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220601035819-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cf5aabab1a40a645ef5374c36ab50b003932d965dc3fa9420f90970b7ab9be69",
	        "Created": "2022-06-01T10:58:48.538057136Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 128450,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T10:58:48.769031293Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/cf5aabab1a40a645ef5374c36ab50b003932d965dc3fa9420f90970b7ab9be69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf5aabab1a40a645ef5374c36ab50b003932d965dc3fa9420f90970b7ab9be69/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf5aabab1a40a645ef5374c36ab50b003932d965dc3fa9420f90970b7ab9be69/hosts",
	        "LogPath": "/var/lib/docker/containers/cf5aabab1a40a645ef5374c36ab50b003932d965dc3fa9420f90970b7ab9be69/cf5aabab1a40a645ef5374c36ab50b003932d965dc3fa9420f90970b7ab9be69-json.log",
	        "Name": "/missing-upgrade-20220601035819-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220601035819-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b43d4d57ef4c48f3fb30a4d041aaf8032b07eb63b8e2e9bd522fceb2b9954e74-init/diff:/var/lib/docker/overlay2/5a5021b04d40486c3f899d3d86469c69d0a0a3a6aedb4a262808e8e0e3212dd9/diff:/var/lib/docker/overlay2/34d2fad93be8a8b08db19932b165d6e4ee12c642f5b9a71ae0da16e41e895455/diff:/var/lib/docker/overlay2/a519d8b71fe163aad87235d12fd7596db7d55f7f2c546ea938ac5b44f16b652f/diff:/var/lib/docker/overlay2/2f15e48f7fd9f51c0246edf680b5bf5101d756e18f610fe615defe179c7ff534/diff:/var/lib/docker/overlay2/b3950a464734420ac98826fd7846d239d550db1d1ae773f32fd285af845cdf22/diff:/var/lib/docker/overlay2/8988ddfdbc34033c8f6dfbda80a939b635699c7799196fc6e1c67870aa3a98fe/diff:/var/lib/docker/overlay2/7ba0245eca92a262dcf5985ae53e44b4246b2148cf3041b19299c4824436c857/diff:/var/lib/docker/overlay2/6c8ceadb783c54050c9822b7a9c7e32f5c8c95922ec59c1027de2484daecd2b4/diff:/var/lib/docker/overlay2/35b8de062c6e2440d11c06c0221db2bc4763da7dcc75f1ff234a1a6620f908c0/diff:/var/lib/docker/overlay2/3584c2
bd1bdbc4f33ae8409b002bb9449ef69f5eac5efaf3029bafd8e59e616d/diff:/var/lib/docker/overlay2/89f35c1cfd5f4b4711c8faf3c75a939b4b42ad8280d52e46ed9174898ebd4dea/diff:/var/lib/docker/overlay2/ba52e45aa55684244ce68ffb6f37275e672a920729ea5be00e4cc02625a11336/diff:/var/lib/docker/overlay2/88f06922766e6932db8f1d9662f093b42c354676160da5d7d627df01138940d2/diff:/var/lib/docker/overlay2/e30f8690cf13147aeb6cc0f6af6a5cc429942a49d65fc69df4976e32002b2c9c/diff:/var/lib/docker/overlay2/a013d03dab2547e58c77f48109fc20ac70497dba6843d25ae3705c054244401e/diff:/var/lib/docker/overlay2/cdb70bf8140c088f0dea40152c2a2ce37a40912c2a58e90e93f143d49795084f/diff:/var/lib/docker/overlay2/65b836a39622281946b823eb252606e8e09382a0f51a3fd2000a31247d55db47/diff:/var/lib/docker/overlay2/ba32c157bb001a6bdee2dd25782f9072b8f2c1f17dd60711c5dc96767ca3633e/diff:/var/lib/docker/overlay2/ebafcf8827f052a7339d84dae13db8562e7c9ff8c83ab195475000d74a29cb36/diff:/var/lib/docker/overlay2/be3502d132a8b884468dd4a5bcd811e32bd090fb7b255d888e53c9d4014ba2e0/diff:/var/lib/d
ocker/overlay2/f3b71613f15fd8e9cf665f9751d01943a85c6e1f36bc8a4317db3788ca9a6d68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b43d4d57ef4c48f3fb30a4d041aaf8032b07eb63b8e2e9bd522fceb2b9954e74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b43d4d57ef4c48f3fb30a4d041aaf8032b07eb63b8e2e9bd522fceb2b9954e74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b43d4d57ef4c48f3fb30a4d041aaf8032b07eb63b8e2e9bd522fceb2b9954e74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220601035819-2342",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220601035819-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220601035819-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220601035819-2342",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220601035819-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c31b3ec8f904d3f6ed71c22295952ae7b5a903938b19d45f00e08fffd59b4e64",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62487"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62488"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62489"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c31b3ec8f904",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "6e2377848db99045a6d5d02eed04c3b8d5ce3197ce063527c810a403a22d1427",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "33a147822a1df2cb962e3f3391e1b8a7b8a9daf43edd77b0aa62a42bb8a73f1c",
	                    "EndpointID": "6e2377848db99045a6d5d02eed04c3b8d5ce3197ce063527c810a403a22d1427",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220601035819-2342 -n missing-upgrade-20220601035819-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220601035819-2342 -n missing-upgrade-20220601035819-2342: exit status 6 (422.883627ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:59:11.983642   10362 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220601035819-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220601035819-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220601035819-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220601035819-2342

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220601035819-2342: (2.570976314s)
--- FAIL: TestMissingContainerUpgrade (55.39s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (45.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.208784549.exe start -p stopped-upgrade-20220601035914-2342 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.208784549.exe start -p stopped-upgrade-20220601035914-2342 --memory=2200 --vm-driver=docker : exit status 70 (33.996472503s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220601035914-2342] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig779343117
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:59:29.732125087 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220601035914-2342" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:59:46.655897991 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220601035914-2342", then "minikube start -p stopped-upgrade-20220601035914-2342 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 46.44 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 111.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 185.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 248.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 312.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 390.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 456.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 535.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 10:59:46.655897991 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.208784549.exe start -p stopped-upgrade-20220601035914-2342 --memory=2200 --vm-driver=docker 
E0601 03:59:52.100415    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.208784549.exe start -p stopped-upgrade-20220601035914-2342 --memory=2200 --vm-driver=docker : exit status 70 (4.576267618s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220601035914-2342] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig968730925
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220601035914-2342" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.208784549.exe start -p stopped-upgrade-20220601035914-2342 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.208784549.exe start -p stopped-upgrade-20220601035914-2342 --memory=2200 --vm-driver=docker : exit status 70 (4.566625307s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220601035914-2342] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1992487158
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220601035914-2342" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (45.60s)
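The repeated StartHost failures above all end the same way: minikube v1.9.0 rewrites /lib/systemd/system/docker.service on the node it is provisioning, and the follow-up `systemctl restart docker` fails. The comments embedded in the regenerated unit explain the override pattern it relies on: a drop-in that redefines ExecStart must first emit an empty `ExecStart=` line to clear the inherited command, otherwise systemd rejects the unit with "Service has more than one ExecStart= setting". The sketch below (illustrative only, not minikube's provisioner) scans a unit file and reports whether its ExecStart lines follow that clear-then-set pattern:

    // execstart_check.go - hedged sketch: verifies the ExecStart override pattern
    // described in the generated unit-file comments above. Illustrative only.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: execstart_check <unit-file>")
            os.Exit(2)
        }
        f, err := os.Open(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        cleared := false // saw an empty "ExecStart=" that resets any inherited command
        commands := 0    // non-empty ExecStart= lines currently in effect
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            if !strings.HasPrefix(line, "ExecStart=") {
                continue
            }
            if line == "ExecStart=" {
                cleared, commands = true, 0 // reset: later commands replace rather than append
                continue
            }
            commands++
        }
        if err := scanner.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        switch {
        case commands > 1:
            fmt.Println("invalid: more than one effective ExecStart= (only allowed for Type=oneshot)")
        case commands == 1 && cleared:
            fmt.Println("ok: inherited ExecStart cleared, then set exactly once")
        case commands == 1:
            fmt.Println("single ExecStart= with no clearing line; a drop-in written this way would append to the base unit")
        default:
            fmt.Println("no ExecStart= command set")
        }
    }

systemd only accepts multiple ExecStart commands for Type=oneshot units, which is why the generated comment calls the duplicated setting out explicitly before clearing it.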

                                                
                                    
TestPause/serial/VerifyStatus (62.81s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220601040007-2342 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220601040007-2342 --output=json --layout=cluster: exit status 2 (16.148631482s)

                                                
                                                
-- stdout --
	{"Name":"pause-20220601040007-2342","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220601040007-2342","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
pause_test.go:200: incorrect status code: 405
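pause_test.go decodes the `--output=json --layout=cluster` document shown above and rejects the 405 ("Stopped") codes it finds on the apiserver and kubelet components at this point in the pause sequence. For reference, a minimal decoder for that layout is sketched below; the field names are copied from the JSON printed above, but the struct definitions are illustrative rather than minikube's own types:

    // cluster_status_check.go - hedged sketch: decodes a
    // `minikube status --output=json --layout=cluster` document and prints the
    // per-component status codes. Field names follow the JSON captured above;
    // the types themselves are illustrative, not minikube's own.
    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "os"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type node struct {
        Name       string               `json:"Name"`
        StatusCode int                  `json:"StatusCode"`
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
    }

    type clusterState struct {
        Name       string               `json:"Name"`
        StatusCode int                  `json:"StatusCode"`
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
        Nodes      []node               `json:"Nodes"`
    }

    func main() {
        // Pipe the output of `minikube status -p <profile> --output=json --layout=cluster` in.
        raw, err := io.ReadAll(os.Stdin)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var st clusterState
        if err := json.Unmarshal(raw, &st); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("cluster %s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
        for name, c := range st.Components {
            fmt.Printf("  %s: %d (%s)\n", name, c.StatusCode, c.StatusName)
        }
        for _, n := range st.Nodes {
            fmt.Printf("  node %s: %d (%s)\n", n.Name, n.StatusCode, n.StatusName)
            for name, c := range n.Components {
                fmt.Printf("    %s: %d (%s)\n", name, c.StatusCode, c.StatusName)
            }
        }
    }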
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220601040007-2342
helpers_test.go:235: (dbg) docker inspect pause-20220601040007-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4103804f1ef08c368f6c8564399d7791bafef0edf3e4aaa9eba7fe11db29826b",
	        "Created": "2022-06-01T11:00:14.200189152Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134540,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:00:14.505640624Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/4103804f1ef08c368f6c8564399d7791bafef0edf3e4aaa9eba7fe11db29826b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4103804f1ef08c368f6c8564399d7791bafef0edf3e4aaa9eba7fe11db29826b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4103804f1ef08c368f6c8564399d7791bafef0edf3e4aaa9eba7fe11db29826b/hosts",
	        "LogPath": "/var/lib/docker/containers/4103804f1ef08c368f6c8564399d7791bafef0edf3e4aaa9eba7fe11db29826b/4103804f1ef08c368f6c8564399d7791bafef0edf3e4aaa9eba7fe11db29826b-json.log",
	        "Name": "/pause-20220601040007-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20220601040007-2342:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220601040007-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de14af2ecde35c19c1789ec9567eb3db3e962ed9c00399452ff01069a2006fd4-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de14af2ecde35c19c1789ec9567eb3db3e962ed9c00399452ff01069a2006fd4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de14af2ecde35c19c1789ec9567eb3db3e962ed9c00399452ff01069a2006fd4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de14af2ecde35c19c1789ec9567eb3db3e962ed9c00399452ff01069a2006fd4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20220601040007-2342",
	                "Source": "/var/lib/docker/volumes/pause-20220601040007-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220601040007-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220601040007-2342",
	                "name.minikube.sigs.k8s.io": "pause-20220601040007-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "af61ff2ef024449572a42a05b077e24cf274b41395988a200c01c25e2b81a2e0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63801"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63802"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63804"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63800"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/af61ff2ef024",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220601040007-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4103804f1ef0",
	                        "pause-20220601040007-2342"
	                    ],
	                    "NetworkID": "073fdc1c0d62d4bc99431aed16a8e28515a6320afb111c80e0911fa7207cd54c",
	                    "EndpointID": "d5a00a9b46430a1acdd6562ec0a40f1637333d81f860d5b59f86a1c3a491d846",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
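The inspect dump above is the raw JSON array that `docker inspect` returns; the detail that matters for this post-mortem is the State block, which still shows the container running and unpaused while minikube reports its components as stopped. A small sketch (illustrative; it shells out to the docker CLI and decodes only the fields quoted above) of pulling that state out programmatically:

    // container_state.go - hedged sketch: runs `docker inspect` and decodes only
    // the Name and State fields referenced in the post-mortem above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    type inspectEntry struct {
        Name  string `json:"Name"`
        State struct {
            Status  string `json:"Status"`
            Running bool   `json:"Running"`
            Paused  bool   `json:"Paused"`
        } `json:"State"`
    }

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: container_state <container>")
            os.Exit(2)
        }
        out, err := exec.Command("docker", "inspect", os.Args[1]).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var entries []inspectEntry // docker inspect always emits a JSON array
        if err := json.Unmarshal(out, &entries); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, e := range entries {
            fmt.Printf("%s: status=%s running=%t paused=%t\n",
                e.Name, e.State.Status, e.State.Running, e.State.Paused)
        }
    }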
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220601040007-2342 -n pause-20220601040007-2342
E0601 04:01:49.048790    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220601040007-2342 -n pause-20220601040007-2342: exit status 2 (16.11444727s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
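The `--format={{.Host}}` flag renders the status through a Go-template-style format string, which is why stdout is just the single word "Running" even though the command exits non-zero. A stdlib sketch of that kind of templated rendering follows; the status struct here is a stand-in for illustration, not minikube's actual status type:

    // status_format.go - hedged sketch: renders a status struct through a Go
    // text/template the way a --format={{.Host}} flag would. The struct is a
    // stand-in for illustration, not minikube's actual type.
    package main

    import (
        "os"
        "text/template"
    )

    type status struct {
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
    }

    func main() {
        format := "{{.Host}}" // value that would come from a --format flag
        st := status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}

        tmpl, err := template.New("status").Parse(format)
        if err != nil {
            panic(err)
        }
        // Writes just "Running", matching the stdout block above.
        if err := tmpl.Execute(os.Stdout, st); err != nil {
            panic(err)
        }
        os.Stdout.WriteString("\n")
    }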
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220601040007-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220601040007-2342 logs -n 25: (14.321444833s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                     | offline-docker-20220601035306-2342     | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:53 PDT | 01 Jun 22 03:53 PDT |
	|         | offline-docker-20220601035306-2342     |                                        |         |                |                     |                     |
	| start   | -p                                     | force-systemd-env-20220601035327-2342  | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:53 PDT | 01 Jun 22 03:53 PDT |
	|         | force-systemd-env-20220601035327-2342  |                                        |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5   |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| ssh     | force-systemd-env-20220601035327-2342  | force-systemd-env-20220601035327-2342  | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:53 PDT | 01 Jun 22 03:53 PDT |
	|         | ssh docker info --format               |                                        |         |                |                     |                     |
	|         | {{.CgroupDriver}}                      |                                        |         |                |                     |                     |
	| delete  | -p                                     | force-systemd-env-20220601035327-2342  | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:53 PDT | 01 Jun 22 03:53 PDT |
	|         | force-systemd-env-20220601035327-2342  |                                        |         |                |                     |                     |
	| start   | -p                                     | docker-flags-20220601035358-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:53 PDT | 01 Jun 22 03:54 PDT |
	|         | docker-flags-20220601035358-2342       |                                        |         |                |                     |                     |
	|         | --cache-images=false                   |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=false                           |                                        |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                   |                                        |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                                        |         |                |                     |                     |
	|         | --docker-opt=debug                     |                                        |         |                |                     |                     |
	|         | --docker-opt=icc=true                  |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| ssh     | docker-flags-20220601035358-2342       | docker-flags-20220601035358-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:54 PDT | 01 Jun 22 03:54 PDT |
	|         | ssh sudo systemctl show                |                                        |         |                |                     |                     |
	|         | docker --property=Environment          |                                        |         |                |                     |                     |
	|         | --no-pager                             |                                        |         |                |                     |                     |
	| ssh     | docker-flags-20220601035358-2342       | docker-flags-20220601035358-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:54 PDT | 01 Jun 22 03:54 PDT |
	|         | ssh sudo systemctl show docker         |                                        |         |                |                     |                     |
	|         | --property=ExecStart --no-pager        |                                        |         |                |                     |                     |
	| delete  | -p                                     | docker-flags-20220601035358-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:54 PDT | 01 Jun 22 03:54 PDT |
	|         | docker-flags-20220601035358-2342       |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220601035425-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:54 PDT | 01 Jun 22 03:54 PDT |
	|         | cert-expiration-20220601035425-2342    |                                        |         |                |                     |                     |
	|         | --memory=2048 --cert-expiration=3m     |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| start   | -p                                     | force-systemd-flag-20220601035353-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:53 PDT | 01 Jun 22 03:57 PDT |
	|         | force-systemd-flag-20220601035353-2342 |                                        |         |                |                     |                     |
	|         | --memory=2048 --force-systemd          |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker |                                        |         |                |                     |                     |
	|         |                                        |                                        |         |                |                     |                     |
	| ssh     | force-systemd-flag-20220601035353-2342 | force-systemd-flag-20220601035353-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:57 PDT | 01 Jun 22 03:57 PDT |
	|         | ssh docker info --format               |                                        |         |                |                     |                     |
	|         | {{.CgroupDriver}}                      |                                        |         |                |                     |                     |
	| delete  | -p                                     | force-systemd-flag-20220601035353-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:57 PDT | 01 Jun 22 03:57 PDT |
	|         | force-systemd-flag-20220601035353-2342 |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-expiration-20220601035425-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:57 PDT | 01 Jun 22 03:57 PDT |
	|         | cert-expiration-20220601035425-2342    |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --cert-expiration=8760h                |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-expiration-20220601035425-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:57 PDT | 01 Jun 22 03:58 PDT |
	|         | cert-expiration-20220601035425-2342    |                                        |         |                |                     |                     |
	| start   | -p                                     | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:57 PDT | 01 Jun 22 03:58 PDT |
	|         | cert-options-20220601035748-2342       |                                        |         |                |                     |                     |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                                        |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                                        |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                                        |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                                        |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	|         | --apiserver-name=localhost             |                                        |         |                |                     |                     |
	| ssh     | cert-options-20220601035748-2342       | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:58 PDT | 01 Jun 22 03:58 PDT |
	|         | ssh openssl x509 -text -noout -in      |                                        |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |         |                |                     |                     |
	| ssh     | -p                                     | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:58 PDT | 01 Jun 22 03:58 PDT |
	|         | cert-options-20220601035748-2342       |                                        |         |                |                     |                     |
	|         | -- sudo cat                            |                                        |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf             |                                        |         |                |                     |                     |
	| delete  | -p                                     | cert-options-20220601035748-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:58 PDT | 01 Jun 22 03:58 PDT |
	|         | cert-options-20220601035748-2342       |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220601035801-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:59 PDT | 01 Jun 22 03:59 PDT |
	|         | running-upgrade-20220601035801-2342    |                                        |         |                |                     |                     |
	| delete  | -p                                     | missing-upgrade-20220601035819-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 03:59 PDT | 01 Jun 22 03:59 PDT |
	|         | missing-upgrade-20220601035819-2342    |                                        |         |                |                     |                     |
	| logs    | -p                                     | stopped-upgrade-20220601035914-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:00 PDT | 01 Jun 22 04:00 PDT |
	|         | stopped-upgrade-20220601035914-2342    |                                        |         |                |                     |                     |
	| delete  | -p                                     | stopped-upgrade-20220601035914-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:00 PDT | 01 Jun 22 04:00 PDT |
	|         | stopped-upgrade-20220601035914-2342    |                                        |         |                |                     |                     |
	| start   | -p pause-20220601040007-2342           | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:00 PDT | 01 Jun 22 04:01 PDT |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=docker             |                                        |         |                |                     |                     |
	| start   | -p pause-20220601040007-2342           | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:01 PDT | 01 Jun 22 04:01 PDT |
	|         | --alsologtostderr -v=1                 |                                        |         |                |                     |                     |
	|         | --driver=docker                        |                                        |         |                |                     |                     |
	| pause   | -p pause-20220601040007-2342           | pause-20220601040007-2342              | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:01 PDT | 01 Jun 22 04:01 PDT |
	|         | --alsologtostderr -v=5                 |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:01:24
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:01:24.118939   10933 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:01:24.119121   10933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:01:24.119127   10933 out.go:309] Setting ErrFile to fd 2...
	I0601 04:01:24.119131   10933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:01:24.119240   10933 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:01:24.119497   10933 out.go:303] Setting JSON to false
	I0601 04:01:24.135294   10933 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3654,"bootTime":1654077630,"procs":347,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:01:24.135416   10933 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:01:24.157717   10933 out.go:177] * [pause-20220601040007-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:01:24.185539   10933 notify.go:193] Checking for updates...
	I0601 04:01:24.210015   10933 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:01:24.231246   10933 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:01:24.252128   10933 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:01:24.273485   10933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:01:24.295400   10933 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:01:24.317668   10933 config.go:178] Loaded profile config "pause-20220601040007-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:01:24.318324   10933 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:01:24.390612   10933 docker.go:137] docker version: linux-20.10.14
	I0601 04:01:24.390763   10933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:01:24.517944   10933 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-01 11:01:24.453609929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:01:24.561532   10933 out.go:177] * Using the docker driver based on existing profile
	I0601 04:01:24.582337   10933 start.go:284] selected driver: docker
	I0601 04:01:24.582365   10933 start.go:806] validating driver "docker" against &{Name:pause-20220601040007-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220601040007-2342 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false}
	I0601 04:01:24.582490   10933 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:01:24.582827   10933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:01:24.711067   10933 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-01 11:01:24.648058671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:01:24.713165   10933 cni.go:95] Creating CNI manager for ""
	I0601 04:01:24.713186   10933 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:01:24.713216   10933 start_flags.go:306] config:
	{Name:pause-20220601040007-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220601040007-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:01:24.734951   10933 out.go:177] * Starting control plane node pause-20220601040007-2342 in cluster pause-20220601040007-2342
	I0601 04:01:24.755611   10933 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:01:24.776529   10933 out.go:177] * Pulling base image ...
	I0601 04:01:24.818692   10933 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:01:24.818732   10933 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:01:24.818768   10933 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:01:24.818789   10933 cache.go:57] Caching tarball of preloaded images
	I0601 04:01:24.818907   10933 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:01:24.819147   10933 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
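The preload check above looks for a cached image tarball keyed by Kubernetes version and container runtime. A minimal shell sketch for verifying such a tarball by hand, assuming the default ~/.minikube location rather than the Jenkins workspace path shown in the log:

    # Path pattern taken from the preload.go lines above (v1.23.6, docker, overlay2, amd64).
    TARBALL="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4"
    if [ -f "$TARBALL" ]; then
        echo "preload cached: $(du -h "$TARBALL" | cut -f1)"
    else
        echo "preload missing; minikube would fall back to downloading or pulling images" >&2
    fi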
	I0601 04:01:24.819468   10933 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/config.json ...
	I0601 04:01:24.884099   10933 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:01:24.884119   10933 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:01:24.884131   10933 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:01:24.884188   10933 start.go:352] acquiring machines lock for pause-20220601040007-2342: {Name:mk89da9c9bdc476375dd0c4284347f4f3a304377 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:01:24.884270   10933 start.go:356] acquired machines lock for "pause-20220601040007-2342" in 59.043µs
	I0601 04:01:24.884286   10933 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:01:24.884296   10933 fix.go:55] fixHost starting: 
	I0601 04:01:24.884538   10933 cli_runner.go:164] Run: docker container inspect pause-20220601040007-2342 --format={{.State.Status}}
	I0601 04:01:24.955639   10933 fix.go:103] recreateIfNeeded on pause-20220601040007-2342: state=Running err=<nil>
	W0601 04:01:24.955676   10933 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:01:24.977451   10933 out.go:177] * Updating the running docker "pause-20220601040007-2342" container ...
	I0601 04:01:25.019316   10933 machine.go:88] provisioning docker machine ...
	I0601 04:01:25.019368   10933 ubuntu.go:169] provisioning hostname "pause-20220601040007-2342"
	I0601 04:01:25.019502   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:25.090920   10933 main.go:134] libmachine: Using SSH client type: native
	I0601 04:01:25.091124   10933 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63801 <nil> <nil>}
	I0601 04:01:25.091142   10933 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220601040007-2342 && echo "pause-20220601040007-2342" | sudo tee /etc/hostname
	I0601 04:01:25.217762   10933 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220601040007-2342
	
	I0601 04:01:25.217844   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:25.288146   10933 main.go:134] libmachine: Using SSH client type: native
	I0601 04:01:25.288308   10933 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63801 <nil> <nil>}
	I0601 04:01:25.288321   10933 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220601040007-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220601040007-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220601040007-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:01:25.404112   10933 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:01:25.404133   10933 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:01:25.404158   10933 ubuntu.go:177] setting up certificates
	I0601 04:01:25.404173   10933 provision.go:83] configureAuth start
	I0601 04:01:25.404236   10933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220601040007-2342
	I0601 04:01:25.474654   10933 provision.go:138] copyHostCerts
	I0601 04:01:25.474732   10933 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:01:25.474743   10933 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:01:25.474852   10933 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:01:25.475049   10933 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:01:25.475058   10933 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:01:25.475119   10933 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:01:25.475291   10933 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:01:25.475298   10933 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:01:25.475354   10933 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:01:25.475464   10933 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.pause-20220601040007-2342 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220601040007-2342]
	I0601 04:01:25.617786   10933 provision.go:172] copyRemoteCerts
	I0601 04:01:25.617850   10933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:01:25.617894   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:25.688509   10933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63801 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601040007-2342/id_rsa Username:docker}
	I0601 04:01:25.772851   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0601 04:01:25.791014   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 04:01:25.808536   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:01:25.825473   10933 provision.go:86] duration metric: configureAuth took 421.286301ms
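configureAuth above regenerates a machine server certificate with the SANs listed in the provision.go line. A small openssl sketch for inspecting those SANs after the fact, again assuming the default ~/.minikube machine directory instead of the workspace path in the log:

    # Show the subject and Subject Alternative Names of the generated machine server cert.
    CERT="$HOME/.minikube/machines/server.pem"
    openssl x509 -in "$CERT" -noout -subject
    openssl x509 -in "$CERT" -noout -text | grep -A1 'Subject Alternative Name'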
	I0601 04:01:25.825485   10933 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:01:25.825615   10933 config.go:178] Loaded profile config "pause-20220601040007-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:01:25.825667   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:25.896963   10933 main.go:134] libmachine: Using SSH client type: native
	I0601 04:01:25.897223   10933 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63801 <nil> <nil>}
	I0601 04:01:25.897234   10933 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:01:26.014555   10933 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:01:26.014572   10933 ubuntu.go:71] root file system type: overlay
	I0601 04:01:26.014740   10933 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:01:26.014805   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:26.086765   10933 main.go:134] libmachine: Using SSH client type: native
	I0601 04:01:26.086998   10933 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63801 <nil> <nil>}
	I0601 04:01:26.087048   10933 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:01:26.211193   10933 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:01:26.211293   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:26.284887   10933 main.go:134] libmachine: Using SSH client type: native
	I0601 04:01:26.285132   10933 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63801 <nil> <nil>}
	I0601 04:01:26.285147   10933 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:01:26.408295   10933 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:01:26.408310   10933 machine.go:91] provisioned docker machine in 1.388964668s
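The provisioning step above rewrites /lib/systemd/system/docker.service inside the node and restarts the engine only when the rendered unit differs from the existing one. A sketch for confirming the active unit afterwards, mirroring the "systemctl cat docker.service" check that appears further down in this log; it assumes a shell on the node (for example via "minikube ssh -p pause-20220601040007-2342"):

    # Inspect the unit that systemd actually loaded and confirm the daemon is up.
    sudo systemctl cat docker.service | grep -E '^(ExecStart|Delegate|KillMode)='
    sudo systemctl is-active docker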
	I0601 04:01:26.408329   10933 start.go:306] post-start starting for "pause-20220601040007-2342" (driver="docker")
	I0601 04:01:26.408337   10933 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:01:26.408413   10933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:01:26.408467   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:26.479757   10933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63801 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601040007-2342/id_rsa Username:docker}
	I0601 04:01:26.565464   10933 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:01:26.569290   10933 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:01:26.569309   10933 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:01:26.569319   10933 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:01:26.569323   10933 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:01:26.569332   10933 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:01:26.569440   10933 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:01:26.569577   10933 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:01:26.569734   10933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:01:26.579380   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:01:26.596694   10933 start.go:309] post-start completed in 188.349072ms
	I0601 04:01:26.596770   10933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:01:26.596813   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:26.729845   10933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63801 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601040007-2342/id_rsa Username:docker}
	I0601 04:01:26.813528   10933 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:01:26.817822   10933 fix.go:57] fixHost completed within 1.933513214s
	I0601 04:01:26.817833   10933 start.go:81] releasing machines lock for "pause-20220601040007-2342", held for 1.933542796s
	I0601 04:01:26.817901   10933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220601040007-2342
	I0601 04:01:26.888893   10933 ssh_runner.go:195] Run: systemctl --version
	I0601 04:01:26.888964   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:26.888970   10933 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:01:26.889045   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:26.967548   10933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63801 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601040007-2342/id_rsa Username:docker}
	I0601 04:01:26.970581   10933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63801 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601040007-2342/id_rsa Username:docker}
	I0601 04:01:27.184289   10933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:01:27.196697   10933 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:01:27.206626   10933 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:01:27.206681   10933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:01:27.216077   10933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:01:27.229266   10933 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:01:27.322886   10933 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:01:27.413723   10933 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:01:27.425491   10933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:01:27.512283   10933 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:01:27.522325   10933 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:01:27.557489   10933 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
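Between the crictl.yaml write and the docker restart above, the node's CRI endpoint is the dockershim socket. A sketch for querying it directly, assuming the crictl binary is present on the node:

    # Point crictl at the same endpoint that /etc/crictl.yaml configures and list all containers.
    sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock ps -a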
	W0601 04:01:27.376325   10396 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220601035912-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220601035912-2342 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
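The suggestions in the kubeadm output above can be run in one pass on the failing node; this sketch only combines the commands kubeadm itself recommends:

    # Kubelet state and recent logs, then any non-pause Kubernetes containers left behind.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    docker ps -a | grep kube | grep -v pause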
	
	I0601 04:01:27.376368   10396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:01:27.818088   10396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:01:27.829667   10396 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:01:27.829725   10396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:01:27.840744   10396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:01:27.840771   10396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:01:27.636277   10933 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:01:27.636518   10933 cli_runner.go:164] Run: docker exec -t pause-20220601040007-2342 dig +short host.docker.internal
	I0601 04:01:27.765012   10933 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:01:27.765129   10933 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:01:27.772396   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:27.853461   10933 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:01:27.853536   10933 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:01:27.887225   10933 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:01:27.887263   10933 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:01:27.887363   10933 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:01:27.920453   10933 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:01:27.920474   10933 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:01:27.920547   10933 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:01:28.006097   10933 cni.go:95] Creating CNI manager for ""
	I0601 04:01:28.006110   10933 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:01:28.006123   10933 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:01:28.006145   10933 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220601040007-2342 NodeName:pause-20220601040007-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minik
ube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:01:28.006262   10933 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20220601040007-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
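The rendered kubeadm configuration above is later written out as /var/tmp/minikube/kubeadm.yaml.new. One way to exercise it without changing node state is kubeadm's dry-run mode; a sketch assuming the v1.23.6 kubeadm binary path that the log checks a few lines below:

    # Preview the init phases this config would drive; --dry-run avoids persisting anything.
    sudo /var/lib/minikube/binaries/v1.23.6/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run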
	
	I0601 04:01:28.006384   10933 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20220601040007-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:pause-20220601040007-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:01:28.006444   10933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:01:28.017548   10933 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:01:28.017618   10933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:01:28.026165   10933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0601 04:01:28.040894   10933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:01:28.056340   10933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0601 04:01:28.072280   10933 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:01:28.077730   10933 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342 for IP: 192.168.58.2
	I0601 04:01:28.077839   10933 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:01:28.077896   10933 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:01:28.077982   10933 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/client.key
	I0601 04:01:28.078056   10933 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/apiserver.key.cee25041
	I0601 04:01:28.078110   10933 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/proxy-client.key
	I0601 04:01:28.078325   10933 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:01:28.078368   10933 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:01:28.078382   10933 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:01:28.078414   10933 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:01:28.078449   10933 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:01:28.078478   10933 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:01:28.078544   10933 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:01:28.079181   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:01:28.101090   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:01:28.121350   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:01:28.142576   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 04:01:28.161707   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:01:28.182929   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:01:28.205392   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:01:28.225747   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:01:28.247770   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:01:28.266320   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:01:28.286311   10933 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:01:28.306471   10933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:01:28.320558   10933 ssh_runner.go:195] Run: openssl version
	I0601 04:01:28.327049   10933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:01:28.337081   10933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:01:28.341347   10933 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:01:28.341399   10933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:01:28.347684   10933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:01:28.357275   10933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:01:28.366889   10933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:01:28.371506   10933 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:01:28.371566   10933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:01:28.377624   10933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:01:28.386769   10933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:01:28.396994   10933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:01:28.403392   10933 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:01:28.403484   10933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:01:28.410185   10933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:01:28.418644   10933 kubeadm.go:395] StartCluster: {Name:pause-20220601040007-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220601040007-2342 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false}
	I0601 04:01:28.418752   10933 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:01:28.450774   10933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:01:28.460501   10933 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:01:28.460520   10933 kubeadm.go:626] restartCluster start
	I0601 04:01:28.460572   10933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:01:28.469439   10933 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:01:28.469506   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:28.547458   10933 kubeconfig.go:92] found "pause-20220601040007-2342" server: "https://127.0.0.1:63800"
	I0601 04:01:28.547879   10933 kapi.go:59] client config for pause-20220601040007-2342: &rest.Config{Host:"https://127.0.0.1:63800", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/client.ke
y", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 04:01:28.548451   10933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:01:28.558408   10933 api_server.go:165] Checking apiserver status ...
	I0601 04:01:28.558482   10933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:01:28.569681   10933 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1591/cgroup
	W0601 04:01:28.578031   10933 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1591/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:01:28.578051   10933 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63800/healthz ...
	I0601 04:01:28.584086   10933 api_server.go:266] https://127.0.0.1:63800/healthz returned 200:
	ok
	I0601 04:01:28.597019   10933 system_pods.go:86] 6 kube-system pods found
	I0601 04:01:28.597045   10933 system_pods.go:89] "coredns-64897985d-nqhq5" [94c28977-aa3a-4056-9638-16a4aa3caa88] Running
	I0601 04:01:28.597054   10933 system_pods.go:89] "etcd-pause-20220601040007-2342" [05c94542-bfb4-43c8-bece-6efacecbd629] Running
	I0601 04:01:28.597060   10933 system_pods.go:89] "kube-apiserver-pause-20220601040007-2342" [d375e5af-2518-4139-9e3c-bfe485db1f14] Running
	I0601 04:01:28.597069   10933 system_pods.go:89] "kube-controller-manager-pause-20220601040007-2342" [14a445e5-0b8b-426f-9f26-05fb1d351aab] Running
	I0601 04:01:28.597083   10933 system_pods.go:89] "kube-proxy-cz54p" [56211b6f-97d5-4174-921d-0934aa9e6194] Running
	I0601 04:01:28.597100   10933 system_pods.go:89] "kube-scheduler-pause-20220601040007-2342" [dfc9b626-ba06-429d-b4b6-77b290a80b20] Running
	I0601 04:01:28.598931   10933 api_server.go:140] control plane version: v1.23.6
	I0601 04:01:28.598969   10933 kubeadm.go:620] The running cluster does not require reconfiguration: 127.0.0.1
	I0601 04:01:28.598980   10933 kubeadm.go:674] Taking a shortcut, as the cluster seems to be properly configured
	I0601 04:01:28.598987   10933 kubeadm.go:630] restartCluster took 138.461128ms
	I0601 04:01:28.598995   10933 kubeadm.go:397] StartCluster complete in 180.35513ms
	I0601 04:01:28.599011   10933 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:01:28.599113   10933 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:01:28.599664   10933 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:01:28.600649   10933 kapi.go:59] client config for pause-20220601040007-2342: &rest.Config{Host:"https://127.0.0.1:63800", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 04:01:28.604007   10933 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220601040007-2342" rescaled to 1
	I0601 04:01:28.604050   10933 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:01:28.604104   10933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:01:28.646346   10933 out.go:177] * Verifying Kubernetes components...
	I0601 04:01:28.604125   10933 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 04:01:28.604261   10933 config.go:178] Loaded profile config "pause-20220601040007-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:01:28.668164   10933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:01:28.668165   10933 addons.go:65] Setting default-storageclass=true in profile "pause-20220601040007-2342"
	I0601 04:01:28.668169   10933 addons.go:65] Setting storage-provisioner=true in profile "pause-20220601040007-2342"
	I0601 04:01:28.668186   10933 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220601040007-2342"
	I0601 04:01:28.668206   10933 addons.go:153] Setting addon storage-provisioner=true in "pause-20220601040007-2342"
	W0601 04:01:28.668217   10933 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:01:28.668299   10933 host.go:66] Checking if "pause-20220601040007-2342" exists ...
	I0601 04:01:28.668562   10933 cli_runner.go:164] Run: docker container inspect pause-20220601040007-2342 --format={{.State.Status}}
	I0601 04:01:28.671441   10933 cli_runner.go:164] Run: docker container inspect pause-20220601040007-2342 --format={{.State.Status}}
	I0601 04:01:28.716000   10933 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 04:01:28.716027   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:28.767124   10933 kapi.go:59] client config for pause-20220601040007-2342: &rest.Config{Host:"https://127.0.0.1:63800", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601040007-2342/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 04:01:28.770744   10933 addons.go:153] Setting addon default-storageclass=true in "pause-20220601040007-2342"
	W0601 04:01:28.770757   10933 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:01:28.770772   10933 host.go:66] Checking if "pause-20220601040007-2342" exists ...
	I0601 04:01:28.791954   10933 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:01:28.771139   10933 cli_runner.go:164] Run: docker container inspect pause-20220601040007-2342 --format={{.State.Status}}
	I0601 04:01:28.812834   10933 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:01:28.812855   10933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:01:28.812958   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:28.830461   10933 node_ready.go:35] waiting up to 6m0s for node "pause-20220601040007-2342" to be "Ready" ...
	I0601 04:01:28.834371   10933 node_ready.go:49] node "pause-20220601040007-2342" has status "Ready":"True"
	I0601 04:01:28.834381   10933 node_ready.go:38] duration metric: took 3.885084ms waiting for node "pause-20220601040007-2342" to be "Ready" ...
	I0601 04:01:28.834389   10933 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:01:28.839734   10933 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-nqhq5" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.846205   10933 pod_ready.go:92] pod "coredns-64897985d-nqhq5" in "kube-system" namespace has status "Ready":"True"
	I0601 04:01:28.846221   10933 pod_ready.go:81] duration metric: took 6.472451ms waiting for pod "coredns-64897985d-nqhq5" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.846238   10933 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.855514   10933 pod_ready.go:92] pod "etcd-pause-20220601040007-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:01:28.855558   10933 pod_ready.go:81] duration metric: took 9.289165ms waiting for pod "etcd-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.855565   10933 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.862628   10933 pod_ready.go:92] pod "kube-apiserver-pause-20220601040007-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:01:28.862641   10933 pod_ready.go:81] duration metric: took 7.069721ms waiting for pod "kube-apiserver-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.862650   10933 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.869544   10933 pod_ready.go:92] pod "kube-controller-manager-pause-20220601040007-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:01:28.869554   10933 pod_ready.go:81] duration metric: took 6.898508ms waiting for pod "kube-controller-manager-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.869562   10933 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cz54p" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:28.879708   10933 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:01:28.879721   10933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:01:28.879781   10933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220601040007-2342
	I0601 04:01:28.899578   10933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63801 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601040007-2342/id_rsa Username:docker}
	I0601 04:01:28.958837   10933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63801 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601040007-2342/id_rsa Username:docker}
	I0601 04:01:29.002894   10933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:01:29.061015   10933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:01:29.234909   10933 pod_ready.go:92] pod "kube-proxy-cz54p" in "kube-system" namespace has status "Ready":"True"
	I0601 04:01:29.234920   10933 pod_ready.go:81] duration metric: took 365.350068ms waiting for pod "kube-proxy-cz54p" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:29.234928   10933 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:29.277688   10933 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 04:01:28.668091   10396 out.go:204]   - Generating certificates and keys ...
	I0601 04:01:29.298585   10933 addons.go:417] enableAddons completed in 694.458982ms
	I0601 04:01:29.634164   10933 pod_ready.go:92] pod "kube-scheduler-pause-20220601040007-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:01:29.634191   10933 pod_ready.go:81] duration metric: took 399.254955ms waiting for pod "kube-scheduler-pause-20220601040007-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:01:29.634198   10933 pod_ready.go:38] duration metric: took 799.789716ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:01:29.634219   10933 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:01:29.634266   10933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:01:29.651173   10933 api_server.go:71] duration metric: took 1.047074618s to wait for apiserver process to appear ...
	I0601 04:01:29.651197   10933 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:01:29.651207   10933 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63800/healthz ...
	I0601 04:01:29.659562   10933 api_server.go:266] https://127.0.0.1:63800/healthz returned 200:
	ok
	I0601 04:01:29.661546   10933 api_server.go:140] control plane version: v1.23.6
	I0601 04:01:29.661558   10933 api_server.go:130] duration metric: took 10.355591ms to wait for apiserver health ...
	I0601 04:01:29.661565   10933 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:01:29.836194   10933 system_pods.go:59] 7 kube-system pods found
	I0601 04:01:29.836210   10933 system_pods.go:61] "coredns-64897985d-nqhq5" [94c28977-aa3a-4056-9638-16a4aa3caa88] Running
	I0601 04:01:29.836214   10933 system_pods.go:61] "etcd-pause-20220601040007-2342" [05c94542-bfb4-43c8-bece-6efacecbd629] Running
	I0601 04:01:29.836219   10933 system_pods.go:61] "kube-apiserver-pause-20220601040007-2342" [d375e5af-2518-4139-9e3c-bfe485db1f14] Running
	I0601 04:01:29.836223   10933 system_pods.go:61] "kube-controller-manager-pause-20220601040007-2342" [14a445e5-0b8b-426f-9f26-05fb1d351aab] Running
	I0601 04:01:29.836226   10933 system_pods.go:61] "kube-proxy-cz54p" [56211b6f-97d5-4174-921d-0934aa9e6194] Running
	I0601 04:01:29.836230   10933 system_pods.go:61] "kube-scheduler-pause-20220601040007-2342" [dfc9b626-ba06-429d-b4b6-77b290a80b20] Running
	I0601 04:01:29.836236   10933 system_pods.go:61] "storage-provisioner" [515509fa-19b9-4a23-a9d4-a5e49edca40a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:01:29.836241   10933 system_pods.go:74] duration metric: took 174.67047ms to wait for pod list to return data ...
	I0601 04:01:29.836246   10933 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:01:30.033709   10933 default_sa.go:45] found service account: "default"
	I0601 04:01:30.033720   10933 default_sa.go:55] duration metric: took 197.468172ms for default service account to be created ...
	I0601 04:01:30.033725   10933 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:01:30.239201   10933 system_pods.go:86] 7 kube-system pods found
	I0601 04:01:30.239214   10933 system_pods.go:89] "coredns-64897985d-nqhq5" [94c28977-aa3a-4056-9638-16a4aa3caa88] Running
	I0601 04:01:30.239219   10933 system_pods.go:89] "etcd-pause-20220601040007-2342" [05c94542-bfb4-43c8-bece-6efacecbd629] Running
	I0601 04:01:30.239222   10933 system_pods.go:89] "kube-apiserver-pause-20220601040007-2342" [d375e5af-2518-4139-9e3c-bfe485db1f14] Running
	I0601 04:01:30.239228   10933 system_pods.go:89] "kube-controller-manager-pause-20220601040007-2342" [14a445e5-0b8b-426f-9f26-05fb1d351aab] Running
	I0601 04:01:30.239231   10933 system_pods.go:89] "kube-proxy-cz54p" [56211b6f-97d5-4174-921d-0934aa9e6194] Running
	I0601 04:01:30.239235   10933 system_pods.go:89] "kube-scheduler-pause-20220601040007-2342" [dfc9b626-ba06-429d-b4b6-77b290a80b20] Running
	I0601 04:01:30.239245   10933 system_pods.go:89] "storage-provisioner" [515509fa-19b9-4a23-a9d4-a5e49edca40a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:01:30.239251   10933 system_pods.go:126] duration metric: took 205.521853ms to wait for k8s-apps to be running ...
	I0601 04:01:30.239256   10933 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:01:30.239304   10933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:01:30.249367   10933 system_svc.go:56] duration metric: took 10.106771ms WaitForService to wait for kubelet.
	I0601 04:01:30.249378   10933 kubeadm.go:572] duration metric: took 1.645278771s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:01:30.249396   10933 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:01:30.436579   10933 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:01:30.436598   10933 node_conditions.go:123] node cpu capacity is 6
	I0601 04:01:30.436610   10933 node_conditions.go:105] duration metric: took 187.20677ms to run NodePressure ...
	I0601 04:01:30.436618   10933 start.go:213] waiting for startup goroutines ...
	I0601 04:01:30.467003   10933 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:01:30.490807   10933 out.go:177] * Done! kubectl is now configured to use "pause-20220601040007-2342" cluster and "default" namespace by default
	I0601 04:01:29.699917   10396 out.go:204]   - Booting up control plane ...
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:00:14 UTC, end at Wed 2022-06-01 11:02:04 UTC. --
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[128]: time="2022-06-01T11:00:17.154859830Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[128]: time="2022-06-01T11:00:17.155362190Z" level=info msg="Daemon shutdown complete"
	Jun 01 11:00:17 pause-20220601040007-2342 systemd[1]: docker.service: Succeeded.
	Jun 01 11:00:17 pause-20220601040007-2342 systemd[1]: Stopped Docker Application Container Engine.
	Jun 01 11:00:17 pause-20220601040007-2342 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.200773653Z" level=info msg="Starting up"
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.202533639Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.202586078Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.202604472Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.202611886Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.203784885Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.203818769Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.203832484Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.203840422Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.207950748Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.211786429Z" level=info msg="Loading containers: start."
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.285518316Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.315828010Z" level=info msg="Loading containers: done."
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.326237960Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.326296562Z" level=info msg="Daemon has completed initialization"
	Jun 01 11:00:17 pause-20220601040007-2342 systemd[1]: Started Docker Application Container Engine.
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.355672347Z" level=info msg="API listen on [::]:2376"
	Jun 01 11:00:17 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:17.358868412Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 11:00:53 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:53.082545719Z" level=info msg="ignoring event" container=4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:00:53 pause-20220601040007-2342 dockerd[381]: time="2022-06-01T11:00:53.125160256Z" level=info msg="ignoring event" container=bbd13bc5343f96eaacdda32d0097642b05ccbc19a1a3879bfbf8af21f6437e33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* time="2022-06-01T11:02:06Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS                       PORTS     NAMES
	831162f760ef   6e38f40d628d           "/storage-provisioner"   37 seconds ago       Up 36 seconds (Paused)                 k8s_storage-provisioner_storage-provisioner_kube-system_515509fa-19b9-4a23-a9d4-a5e49edca40a_0
	3de8c1a05653   k8s.gcr.io/pause:3.6   "/pause"                 37 seconds ago       Up 36 seconds (Paused)                 k8s_POD_storage-provisioner_kube-system_515509fa-19b9-4a23-a9d4-a5e49edca40a_0
	796c86ea807a   a4ca41631cc7           "/coredns -conf /etc…"   About a minute ago   Up About a minute (Paused)             k8s_coredns_coredns-64897985d-nqhq5_kube-system_94c28977-aa3a-4056-9638-16a4aa3caa88_0
	bf9bc7a4404f   4c0375452406           "/usr/local/bin/kube…"   About a minute ago   Up About a minute (Paused)             k8s_kube-proxy_kube-proxy-cz54p_kube-system_56211b6f-97d5-4174-921d-0934aa9e6194_0
	f6314160e333   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-proxy-cz54p_kube-system_56211b6f-97d5-4174-921d-0934aa9e6194_0
	418b07d95854   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_coredns-64897985d-nqhq5_kube-system_94c28977-aa3a-4056-9638-16a4aa3caa88_0
	80a3bfad9955   df7b72818ad2           "kube-controller-man…"   About a minute ago   Up About a minute (Paused)             k8s_kube-controller-manager_kube-controller-manager-pause-20220601040007-2342_kube-system_17ed751df3ddd1538ed486ee5e9e9a97_0
	614400944e46   8fa62c12256d           "kube-apiserver --ad…"   About a minute ago   Up About a minute (Paused)             k8s_kube-apiserver_kube-apiserver-pause-20220601040007-2342_kube-system_ec028a3542b1d931461404042f6dc40b_0
	232ab74a8378   25f8c7f3da61           "etcd --advertise-cl…"   About a minute ago   Up About a minute (Paused)             k8s_etcd_etcd-pause-20220601040007-2342_kube-system_632d528a048f552feffb800a74378edc_0
	fba809fafc2e   595f327f224a           "kube-scheduler --au…"   About a minute ago   Up About a minute (Paused)             k8s_kube-scheduler_kube-scheduler-pause-20220601040007-2342_kube-system_ce76828824eadb7ceea93f758197600e_0
	70fb84988afe   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-controller-manager-pause-20220601040007-2342_kube-system_17ed751df3ddd1538ed486ee5e9e9a97_0
	b32349ee6c54   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-apiserver-pause-20220601040007-2342_kube-system_ec028a3542b1d931461404042f6dc40b_0
	03e0ae13cf06   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_etcd-pause-20220601040007-2342_kube-system_632d528a048f552feffb800a74378edc_0
	d2354cc8aedf   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-scheduler-pause-20220601040007-2342_kube-system_ce76828824eadb7ceea93f758197600e_0
	
	* 
	* ==> coredns [796c86ea807a] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001456] FS-Cache: O-key=[8] '1a55850300000000'
	[  +0.001119] FS-Cache: N-cookie c=00000000e6a5beda [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.001800] FS-Cache: N-cookie d=00000000cf9b0095 n=00000000a7c12bf4
	[  +0.001489] FS-Cache: N-key=[8] '1a55850300000000'
	[  +0.002341] FS-Cache: Duplicate cookie detected
	[  +0.001051] FS-Cache: O-cookie c=00000000dda9b39f [p=00000000b0a9f61c fl=226 nc=0 na=1]
	[  +0.001790] FS-Cache: O-cookie d=00000000cf9b0095 n=00000000e9dcd918
	[  +0.001537] FS-Cache: O-key=[8] '1a55850300000000'
	[  +0.001114] FS-Cache: N-cookie c=00000000e6a5beda [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.001838] FS-Cache: N-cookie d=00000000cf9b0095 n=00000000a59d38ab
	[  +0.001434] FS-Cache: N-key=[8] '1a55850300000000'
	[  +3.677007] FS-Cache: Duplicate cookie detected
	[  +0.001038] FS-Cache: O-cookie c=00000000a106af5f [p=00000000b0a9f61c fl=226 nc=0 na=1]
	[  +0.001807] FS-Cache: O-cookie d=00000000cf9b0095 n=000000007513b1d2
	[  +0.001597] FS-Cache: O-key=[8] '1955850300000000'
	[  +0.001172] FS-Cache: N-cookie c=00000000163774b8 [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.001966] FS-Cache: N-cookie d=00000000cf9b0095 n=00000000a8817ec9
	[  +0.001503] FS-Cache: N-key=[8] '1955850300000000'
	[  +0.707476] FS-Cache: Duplicate cookie detected
	[  +0.001066] FS-Cache: O-cookie c=00000000558c30a4 [p=00000000b0a9f61c fl=226 nc=0 na=1]
	[  +0.001781] FS-Cache: O-cookie d=00000000cf9b0095 n=000000003898637e
	[  +0.001876] FS-Cache: O-key=[8] '2355850300000000'
	[  +0.001295] FS-Cache: N-cookie c=0000000080d67bd4 [p=00000000b0a9f61c fl=2 nc=0 na=1]
	[  +0.002050] FS-Cache: N-cookie d=00000000cf9b0095 n=0000000056102690
	[  +0.001550] FS-Cache: N-key=[8] '2355850300000000'
	
	* 
	* ==> etcd [232ab74a8378] <==
	* {"level":"info","ts":"2022-06-01T11:00:24.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-01T11:00:24.150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:00:24.152Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:00:24.152Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T11:00:24.152Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T11:00:24.152Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:00:24.152Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:00:24.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:00:24.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:00:24.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-01T11:00:24.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:00:24.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T11:00:24.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:00:24.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T11:00:24.601Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:00:24.601Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:pause-20220601040007-2342 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:00:24.601Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:00:24.602Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:00:24.602Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:00:24.602Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:00:24.602Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:00:24.603Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-01T11:00:24.603Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:00:24.648Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:00:24.648Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  11:02:17 up 42 min,  0 users,  load average: 0.45, 0.81, 0.83
	Linux pause-20220601040007-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [614400944e46] <==
	* I0601 11:00:26.459601       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:00:26.459665       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:00:26.459709       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:00:26.459752       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:00:26.461876       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:00:26.464733       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:00:27.359885       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:00:27.360036       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:00:27.364412       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 11:00:27.366797       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 11:00:27.366846       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:00:27.690582       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:00:27.714589       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:00:27.812104       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:00:27.816132       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 11:00:27.817111       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:00:27.820082       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:00:28.499279       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:00:29.381109       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:00:29.386755       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:00:29.397526       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:00:29.560625       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:00:41.856777       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:00:42.157054       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:00:43.488509       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [80a3bfad9955] <==
	* I0601 11:00:41.352491       1 shared_informer.go:247] Caches are synced for GC 
	I0601 11:00:41.354475       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 11:00:41.362859       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 11:00:41.367289       1 shared_informer.go:247] Caches are synced for job 
	I0601 11:00:41.373577       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:00:41.381074       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:00:41.398126       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0601 11:00:41.400629       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0601 11:00:41.401972       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0601 11:00:41.402029       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0601 11:00:41.402050       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0601 11:00:41.521804       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:00:41.548014       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:00:41.549506       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0601 11:00:41.550838       1 shared_informer.go:247] Caches are synced for endpoint 
	I0601 11:00:41.584669       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:00:41.860846       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cz54p"
	I0601 11:00:42.011162       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:00:42.048258       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:00:42.048272       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:00:42.159157       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:00:42.168733       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:00:42.358604       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-59pfz"
	I0601 11:00:42.362242       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-nqhq5"
	I0601 11:00:42.378053       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-59pfz"
	
	* 
	* ==> kube-proxy [bf9bc7a4404f] <==
	* I0601 11:00:43.457837       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 11:00:43.457919       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 11:00:43.457942       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:00:43.483043       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:00:43.483256       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:00:43.483347       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:00:43.483530       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:00:43.484620       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:00:43.485144       1 config.go:317] "Starting service config controller"
	I0601 11:00:43.485210       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:00:43.485228       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:00:43.485231       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:00:43.586048       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:00:43.586105       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [fba809fafc2e] <==
	* E0601 11:00:26.407954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:00:26.408127       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:00:26.408145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:00:26.406766       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:00:26.407927       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:00:26.408309       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:00:26.408435       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:00:26.408448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:00:26.408544       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:00:26.408561       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:00:26.408831       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:00:26.408846       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:00:26.408902       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:00:26.408856       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:00:27.258982       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:00:27.259019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:00:27.266667       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:00:27.266713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:00:27.331234       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:00:27.331285       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:00:27.383231       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:00:27.383265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:00:27.439463       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:00:27.439500       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:00:30.303282       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:00:14 UTC, end at Wed 2022-06-01 11:02:17 UTC. --
	Jun 01 11:00:43 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:43.084788    1778 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f6314160e333c54575ef3af762b53a7e43e6cf78dcb570481caba484902d4bc8"
	Jun 01 11:00:43 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:43.193374    1778 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-nqhq5 through plugin: invalid network status for"
	Jun 01 11:00:43 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:43.193564    1778 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="418b07d95854d213d7d38650320e831930cb5ede0c4554106c101869a832d7c3"
	Jun 01 11:00:43 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:43.203009    1778 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-59pfz through plugin: invalid network status for"
	Jun 01 11:00:44 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:44.217330    1778 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-nqhq5 through plugin: invalid network status for"
	Jun 01 11:00:44 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:44.220966    1778 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-59pfz through plugin: invalid network status for"
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.217785    1778 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9d06607-ea99-4968-8724-271678577d5f-config-volume\") pod \"c9d06607-ea99-4968-8724-271678577d5f\" (UID: \"c9d06607-ea99-4968-8724-271678577d5f\") "
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.217914    1778 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm4zg\" (UniqueName: \"kubernetes.io/projected/c9d06607-ea99-4968-8724-271678577d5f-kube-api-access-wm4zg\") pod \"c9d06607-ea99-4968-8724-271678577d5f\" (UID: \"c9d06607-ea99-4968-8724-271678577d5f\") "
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: W0601 11:00:53.218018    1778 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/c9d06607-ea99-4968-8724-271678577d5f/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.218155    1778 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9d06607-ea99-4968-8724-271678577d5f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9d06607-ea99-4968-8724-271678577d5f" (UID: "c9d06607-ea99-4968-8724-271678577d5f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.220002    1778 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d06607-ea99-4968-8724-271678577d5f-kube-api-access-wm4zg" (OuterVolumeSpecName: "kube-api-access-wm4zg") pod "c9d06607-ea99-4968-8724-271678577d5f" (UID: "c9d06607-ea99-4968-8724-271678577d5f"). InnerVolumeSpecName "kube-api-access-wm4zg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.253667    1778 scope.go:110] "RemoveContainer" containerID="4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744"
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.260903    1778 scope.go:110] "RemoveContainer" containerID="4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744"
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: E0601 11:00:53.261908    1778 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744" containerID="4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744"
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.261980    1778 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744} err="failed to get container status \"4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744\": rpc error: code = Unknown desc = Error: No such container: 4995b076bc8754ab1774c53ec4704c059e9d402d34d731328912397d187d5744"
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.318723    1778 reconciler.go:300] "Volume detached for volume \"kube-api-access-wm4zg\" (UniqueName: \"kubernetes.io/projected/c9d06607-ea99-4968-8724-271678577d5f-kube-api-access-wm4zg\") on node \"pause-20220601040007-2342\" DevicePath \"\""
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.318770    1778 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9d06607-ea99-4968-8724-271678577d5f-config-volume\") on node \"pause-20220601040007-2342\" DevicePath \"\""
	Jun 01 11:00:53 pause-20220601040007-2342 kubelet[1778]: I0601 11:00:53.760958    1778 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c9d06607-ea99-4968-8724-271678577d5f path="/var/lib/kubelet/pods/c9d06607-ea99-4968-8724-271678577d5f/volumes"
	Jun 01 11:01:29 pause-20220601040007-2342 kubelet[1778]: I0601 11:01:29.226522    1778 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:01:29 pause-20220601040007-2342 kubelet[1778]: I0601 11:01:29.362244    1778 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg6lb\" (UniqueName: \"kubernetes.io/projected/515509fa-19b9-4a23-a9d4-a5e49edca40a-kube-api-access-gg6lb\") pod \"storage-provisioner\" (UID: \"515509fa-19b9-4a23-a9d4-a5e49edca40a\") " pod="kube-system/storage-provisioner"
	Jun 01 11:01:29 pause-20220601040007-2342 kubelet[1778]: I0601 11:01:29.362280    1778 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/515509fa-19b9-4a23-a9d4-a5e49edca40a-tmp\") pod \"storage-provisioner\" (UID: \"515509fa-19b9-4a23-a9d4-a5e49edca40a\") " pod="kube-system/storage-provisioner"
	Jun 01 11:01:31 pause-20220601040007-2342 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 01 11:01:31 pause-20220601040007-2342 systemd[1]: kubelet.service: Succeeded.
	Jun 01 11:01:31 pause-20220601040007-2342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 11:01:31 pause-20220601040007-2342 systemd[1]: kubelet.service: Consumed 1.903s CPU time.
	
	* 
	* ==> storage-provisioner [831162f760ef] <==
	* I0601 11:01:29.807171       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:01:29.817271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:01:29.817306       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:01:29.825996       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:01:29.826112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220601040007-2342_0922f0ca-7b86-454f-acfe-2eb67e320410!
	I0601 11:01:29.826782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79a77f91-b152-47b0-86e9-43dc13acce73", APIVersion:"v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220601040007-2342_0922f0ca-7b86-454f-acfe-2eb67e320410 became leader
	I0601 11:01:29.926227       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220601040007-2342_0922f0ca-7b86-454f-acfe-2eb67e320410!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 04:02:16.894040   11044 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
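
The step that failed inside those logs is a plain exec of the bundled kubectl (`describe nodes`), which stalled at the TLS handshake. A minimal sketch of running the same command under an explicit deadline, so a handshake stall surfaces as a timeout instead of hanging the log collector; the kubectl binary and kubeconfig paths below are placeholders, not the exact paths from this run:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Placeholders; in this run the binary was
        // /var/lib/minikube/binaries/v1.23.6/kubectl with --kubeconfig=/var/lib/minikube/kubeconfig.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        cmd := exec.CommandContext(ctx, "kubectl", "describe", "nodes",
            "--kubeconfig", "/path/to/kubeconfig")
        out, err := cmd.CombinedOutput()
        if ctx.Err() == context.DeadlineExceeded {
            fmt.Println("describe nodes timed out (e.g. a TLS handshake stall)")
            return
        }
        if err != nil {
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }
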
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220601040007-2342 -n pause-20220601040007-2342
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220601040007-2342 -n pause-20220601040007-2342: exit status 2 (16.11854935s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220601040007-2342" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (62.81s)
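
The helper's check is an exec of `minikube status --format={{.APIServer}} -p <profile>`; a non-zero exit is tolerated ("may be ok") and the printed state string decides whether the kubectl steps are skipped. A rough Go equivalent of that probe, assuming a minikube binary on PATH (the profile name below is just the one from this test):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Placeholder profile name; substitute a real minikube profile.
        profile := "pause-20220601040007-2342"

        // Same invocation as the test helper: print only the API server state.
        cmd := exec.Command("minikube", "status",
            "--format={{.APIServer}}", "-p", profile)
        out, err := cmd.CombinedOutput()
        state := strings.TrimSpace(string(out))

        // minikube status exits non-zero when components are not running,
        // so an error here is expected for a stopped cluster.
        if err != nil {
            fmt.Printf("status exited with error (%v), reported state: %q\n", err, state)
            return
        }
        fmt.Printf("apiserver state: %q\n", state)
    }
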

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (250.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m9.897997593s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220601040844-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node old-k8s-version-20220601040844-2342 in cluster old-k8s-version-20220601040844-2342
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 04:08:44.907842   13010 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:08:44.908059   13010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:08:44.908068   13010 out.go:309] Setting ErrFile to fd 2...
	I0601 04:08:44.908073   13010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:08:44.908176   13010 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:08:44.908503   13010 out.go:303] Setting JSON to false
	I0601 04:08:44.923474   13010 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4094,"bootTime":1654077630,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:08:44.923592   13010 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:08:44.961188   13010 out.go:177] * [old-k8s-version-20220601040844-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:08:45.019992   13010 notify.go:193] Checking for updates...
	I0601 04:08:45.058114   13010 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:08:45.115979   13010 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:08:45.175010   13010 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:08:45.234095   13010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:08:45.308801   13010 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:08:45.346879   13010 config.go:178] Loaded profile config "kubenet-20220601035306-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:08:45.346974   13010 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:08:45.446338   13010 docker.go:137] docker version: linux-20.10.14
	I0601 04:08:45.446480   13010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:08:45.573023   13010 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:08:45.525456619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:08:45.596606   13010 out.go:177] * Using the docker driver based on user configuration
	I0601 04:08:45.617911   13010 start.go:284] selected driver: docker
	I0601 04:08:45.617931   13010 start.go:806] validating driver "docker" against <nil>
	I0601 04:08:45.617959   13010 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:08:45.621465   13010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:08:45.753512   13010 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:08:45.700743664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:08:45.753629   13010 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 04:08:45.753828   13010 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:08:45.776041   13010 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 04:08:45.795538   13010 cni.go:95] Creating CNI manager for ""
	I0601 04:08:45.795558   13010 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:08:45.795574   13010 start_flags.go:306] config:
	{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:08:45.832739   13010 out.go:177] * Starting control plane node old-k8s-version-20220601040844-2342 in cluster old-k8s-version-20220601040844-2342
	I0601 04:08:45.889690   13010 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:08:46.005938   13010 out.go:177] * Pulling base image ...
	I0601 04:08:46.064635   13010 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:08:46.064666   13010 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:08:46.064707   13010 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 04:08:46.064751   13010 cache.go:57] Caching tarball of preloaded images
	I0601 04:08:46.065192   13010 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:08:46.065308   13010 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 04:08:46.065609   13010 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:08:46.065647   13010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json: {Name:mkf57a4b00cc0f3b9b593472b63442dfed30b12f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:08:46.234360   13010 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:08:46.234398   13010 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:08:46.234449   13010 cache.go:206] Successfully downloaded all kic artifacts
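
The kicbase steps above boil down to: check whether the pinned base image is already in the local Docker daemon, and skip the pull/load if so. A hedged sketch of such a presence check using `docker image inspect`; the image reference is the one from this log, trimmed of its digest for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imagePresent reports whether the local Docker daemon already has the image.
    func imagePresent(ref string) bool {
        // `docker image inspect` exits non-zero when the image is absent.
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807"
        if imagePresent(ref) {
            fmt.Println("found in local docker daemon, skipping pull")
            return
        }
        fmt.Println("not found locally, would pull:", ref)
    }
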
	I0601 04:08:46.234597   13010 start.go:352] acquiring machines lock for old-k8s-version-20220601040844-2342: {Name:mkf87fe8c4a511c3ef565c4140ef4a74b527ad92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:08:46.234827   13010 start.go:356] acquired machines lock for "old-k8s-version-20220601040844-2342" in 211.768µs
	I0601 04:08:46.234866   13010 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:08:46.234993   13010 start.go:131] createHost starting for "" (driver="docker")
	I0601 04:08:46.293638   13010 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 04:08:46.293864   13010 start.go:165] libmachine.API.Create for "old-k8s-version-20220601040844-2342" (driver="docker")
	I0601 04:08:46.293895   13010 client.go:168] LocalClient.Create starting
	I0601 04:08:46.293964   13010 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 04:08:46.293998   13010 main.go:134] libmachine: Decoding PEM data...
	I0601 04:08:46.294013   13010 main.go:134] libmachine: Parsing certificate...
	I0601 04:08:46.294071   13010 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 04:08:46.294096   13010 main.go:134] libmachine: Decoding PEM data...
	I0601 04:08:46.294104   13010 main.go:134] libmachine: Parsing certificate...
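
The libmachine sequence above (read certificate data, decode the PEM block, parse the certificate) corresponds directly to Go's standard library. A self-contained sketch with a placeholder path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Placeholder path; in this run it was .minikube/certs/ca.pem.
        data, err := os.ReadFile("ca.pem")
        if err != nil {
            log.Fatalf("reading certificate data: %v", err)
        }

        // Decoding PEM data...
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            log.Fatal("no CERTIFICATE block found")
        }

        // Parsing certificate...
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatalf("parsing certificate: %v", err)
        }
        fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
    }
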
	I0601 04:08:46.294516   13010 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601040844-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 04:08:46.398139   13010 cli_runner.go:211] docker network inspect old-k8s-version-20220601040844-2342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 04:08:46.398229   13010 network_create.go:272] running [docker network inspect old-k8s-version-20220601040844-2342] to gather additional debugging logs...
	I0601 04:08:46.398258   13010 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601040844-2342
	W0601 04:08:46.472788   13010 cli_runner.go:211] docker network inspect old-k8s-version-20220601040844-2342 returned with exit code 1
	I0601 04:08:46.472823   13010 network_create.go:275] error running [docker network inspect old-k8s-version-20220601040844-2342]: docker network inspect old-k8s-version-20220601040844-2342: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601040844-2342
	I0601 04:08:46.472849   13010 network_create.go:277] output of [docker network inspect old-k8s-version-20220601040844-2342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601040844-2342
	
	** /stderr **
	I0601 04:08:46.472929   13010 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 04:08:46.544189   13010 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00046cfd0] misses:0}
	I0601 04:08:46.544243   13010 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 04:08:46.544271   13010 network_create.go:115] attempt to create docker network old-k8s-version-20220601040844-2342 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 04:08:46.544372   13010 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601040844-2342
	W0601 04:08:46.611417   13010 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601040844-2342 returned with exit code 1
	W0601 04:08:46.611483   13010 network_create.go:107] failed to create docker network old-k8s-version-20220601040844-2342 192.168.49.0/24, will retry: subnet is taken
	I0601 04:08:46.611892   13010 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00046cfd0] amended:false}} dirty:map[] misses:0}
	I0601 04:08:46.611926   13010 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 04:08:46.612217   13010 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00046cfd0] amended:true}} dirty:map[192.168.49.0:0xc00046cfd0 192.168.58.0:0xc0003b8508] misses:0}
	I0601 04:08:46.612233   13010 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 04:08:46.612241   13010 network_create.go:115] attempt to create docker network old-k8s-version-20220601040844-2342 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 04:08:46.612335   13010 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601040844-2342
	I0601 04:08:46.747881   13010 network_create.go:99] docker network old-k8s-version-20220601040844-2342 192.168.58.0/24 created
	I0601 04:08:46.747941   13010 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20220601040844-2342" container
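
The network setup above follows a reserve-and-retry pattern: reserve 192.168.49.0/24, let `docker network create` fail because that subnet is already taken, then fall back to 192.168.58.0/24 and pin the node at the first client address (.2). A simplified sketch of that fallback loop, not minikube's actual network_create.go; the network name is a placeholder:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        name := "example-minikube-net" // placeholder network name
        // Candidate /24 subnets tried in order, as in the log above.
        candidates := []struct{ subnet, gateway string }{
            {"192.168.49.0/24", "192.168.49.1"},
            {"192.168.58.0/24", "192.168.58.1"},
            {"192.168.67.0/24", "192.168.67.1"},
        }

        for _, c := range candidates {
            cmd := exec.Command("docker", "network", "create",
                "--driver=bridge",
                "--subnet="+c.subnet,
                "--gateway="+c.gateway,
                name)
            out, err := cmd.CombinedOutput()
            if err != nil {
                // Typically the subnet is taken; move on to the next candidate.
                fmt.Printf("subnet %s unavailable, trying next candidate: %s", c.subnet, out)
                continue
            }
            fmt.Printf("created network %s on %s (gateway %s)\n", name, c.subnet, c.gateway)
            return
        }
        log.Fatal("no free private subnet found")
    }
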
	I0601 04:08:46.748069   13010 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 04:08:46.820614   13010 cli_runner.go:164] Run: docker volume create old-k8s-version-20220601040844-2342 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601040844-2342 --label created_by.minikube.sigs.k8s.io=true
	I0601 04:08:46.893191   13010 oci.go:103] Successfully created a docker volume old-k8s-version-20220601040844-2342
	I0601 04:08:46.893309   13010 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220601040844-2342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220601040844-2342 --entrypoint /usr/bin/test -v old-k8s-version-20220601040844-2342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 04:08:47.448981   13010 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220601040844-2342
	I0601 04:08:47.449068   13010 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:08:47.449088   13010 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 04:08:47.449240   13010 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220601040844-2342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 04:08:51.520216   13010 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220601040844-2342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (4.070872505s)
	I0601 04:08:51.520236   13010 kic.go:188] duration metric: took 4.071105 seconds to extract preloaded images to volume
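
Preload extraction runs tar inside a throwaway kicbase container, with the lz4 tarball bind-mounted read-only and the named volume mounted at /extractDir, exactly as in the `docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir` invocation above. A hedged Go wrapper around the same invocation; the tarball path, volume name and image are placeholders:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholders: substitute the real preload tarball, volume and base image.
        tarball := "/path/to/preloaded-images.tar.lz4"
        volume := "example-minikube-volume"
        baseImage := "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807"

        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("extracting preload into volume: %v", err)
        }
    }
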
	I0601 04:08:51.520324   13010 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 04:08:51.646567   13010 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220601040844-2342 --name old-k8s-version-20220601040844-2342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220601040844-2342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220601040844-2342 --network old-k8s-version-20220601040844-2342 --ip 192.168.58.2 --volume old-k8s-version-20220601040844-2342:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 04:08:52.045532   13010 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Running}}
	I0601 04:08:52.124952   13010 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:08:52.207821   13010 cli_runner.go:164] Run: docker exec old-k8s-version-20220601040844-2342 stat /var/lib/dpkg/alternatives/iptables
	I0601 04:08:52.351655   13010 oci.go:247] the created container "old-k8s-version-20220601040844-2342" has a running status.
	I0601 04:08:52.351680   13010 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa...
	I0601 04:08:52.419161   13010 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 04:08:52.540473   13010 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:08:52.612507   13010 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 04:08:52.612524   13010 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220601040844-2342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 04:08:52.741034   13010 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:08:52.812186   13010 machine.go:88] provisioning docker machine ...
	I0601 04:08:52.812251   13010 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601040844-2342"
	I0601 04:08:52.812363   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:52.885824   13010 main.go:134] libmachine: Using SSH client type: native
	I0601 04:08:52.886025   13010 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51586 <nil> <nil>}
	I0601 04:08:52.886039   13010 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601040844-2342 && echo "old-k8s-version-20220601040844-2342" | sudo tee /etc/hostname
	I0601 04:08:53.013958   13010 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601040844-2342
	
	I0601 04:08:53.014039   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:53.085837   13010 main.go:134] libmachine: Using SSH client type: native
	I0601 04:08:53.085995   13010 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51586 <nil> <nil>}
	I0601 04:08:53.086026   13010 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601040844-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601040844-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601040844-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:08:53.201643   13010 main.go:134] libmachine: SSH cmd err, output: <nil>: 
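
The SSH snippet above is an idempotent hosts-file update: leave /etc/hosts alone if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic in plain Go, operating on an in-memory copy of the file (a sketch only; the real provisioner does this over SSH with sudo):

    package main

    import (
        "fmt"
        "strings"
    )

    // hasName reports whether name appears as a whole field on the line,
    // mirroring the grep check in the shell snippet above.
    func hasName(line, name string) bool {
        for _, f := range strings.Fields(line) {
            if f == name {
                return true
            }
        }
        return false
    }

    // ensureHostname keeps hosts unchanged if name is already present, rewrites an
    // existing 127.0.1.1 line if there is one, and otherwise appends a new entry.
    func ensureHostname(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            if hasName(l, name) {
                return hosts
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        hosts := "127.0.0.1 localhost\n127.0.1.1 oldname\n"
        fmt.Print(ensureHostname(hosts, "old-k8s-version-20220601040844-2342"))
    }
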
	I0601 04:08:53.201665   13010 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:08:53.201712   13010 ubuntu.go:177] setting up certificates
	I0601 04:08:53.201726   13010 provision.go:83] configureAuth start
	I0601 04:08:53.201793   13010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:08:53.273218   13010 provision.go:138] copyHostCerts
	I0601 04:08:53.273294   13010 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:08:53.273303   13010 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:08:53.273403   13010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:08:53.273607   13010 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:08:53.273618   13010 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:08:53.273678   13010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:08:53.273841   13010 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:08:53.273849   13010 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:08:53.273911   13010 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:08:53.274034   13010 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601040844-2342 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601040844-2342]
	I0601 04:08:53.457994   13010 provision.go:172] copyRemoteCerts
	I0601 04:08:53.458051   13010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:08:53.458096   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:53.529624   13010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51586 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:08:53.617769   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:08:53.634880   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 04:08:53.651876   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 04:08:53.668482   13010 provision.go:86] duration metric: configureAuth took 466.738402ms
	I0601 04:08:53.668495   13010 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:08:53.668629   13010 config.go:178] Loaded profile config "old-k8s-version-20220601040844-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 04:08:53.668681   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:53.740986   13010 main.go:134] libmachine: Using SSH client type: native
	I0601 04:08:53.741194   13010 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51586 <nil> <nil>}
	I0601 04:08:53.741210   13010 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:08:53.858753   13010 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:08:53.858768   13010 ubuntu.go:71] root file system type: overlay
	I0601 04:08:53.858915   13010 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:08:53.859000   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:53.930872   13010 main.go:134] libmachine: Using SSH client type: native
	I0601 04:08:53.931034   13010 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51586 <nil> <nil>}
	I0601 04:08:53.931096   13010 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:08:54.057512   13010 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:08:54.057602   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:54.130956   13010 main.go:134] libmachine: Using SSH client type: native
	I0601 04:08:54.131092   13010 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 51586 <nil> <nil>}
	I0601 04:08:54.131106   13010 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:08:54.720887   13010 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 11:08:54.060942455 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
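
Two details in the unit update above are worth noting: the empty `ExecStart=` line clears the command inherited from the base unit before the new one is set (as the unit's own comments explain), and the `diff -u ... || { mv ...; systemctl ... restart docker; }` guard only replaces the unit and restarts Docker when the content actually changed. A small sketch of that change-detection half, against a placeholder local file rather than /lib/systemd/system/docker.service:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
    )

    // syncFile writes desired to path only when the current content differs,
    // mirroring the diff-or-replace pattern above. It returns true when the file
    // changed and the service would need a daemon-reload and restart.
    func syncFile(path string, desired []byte) (bool, error) {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, desired) {
            return false, nil // up to date, no restart needed
        }
        if err := os.WriteFile(path, desired, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        // Placeholder path; the real target in this log is /lib/systemd/system/docker.service.
        path := "docker.service.example"
        unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n")

        changed, err := syncFile(path, unit)
        if err != nil {
            log.Fatal(err)
        }
        if changed {
            fmt.Println("unit updated; would run: systemctl daemon-reload && systemctl restart docker")
        } else {
            fmt.Println("unit already up to date")
        }
    }
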
	
	I0601 04:08:54.720913   13010 machine.go:91] provisioned docker machine in 1.908688293s
	I0601 04:08:54.720921   13010 client.go:171] LocalClient.Create took 8.426928735s
	I0601 04:08:54.720939   13010 start.go:173] duration metric: libmachine.API.Create for "old-k8s-version-20220601040844-2342" took 8.426980975s
	I0601 04:08:54.720945   13010 start.go:306] post-start starting for "old-k8s-version-20220601040844-2342" (driver="docker")
	I0601 04:08:54.720949   13010 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:08:54.721046   13010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:08:54.721124   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:54.794377   13010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51586 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:08:54.881203   13010 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:08:54.884698   13010 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:08:54.884716   13010 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:08:54.884723   13010 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:08:54.884728   13010 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:08:54.884736   13010 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:08:54.884840   13010 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:08:54.885000   13010 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:08:54.885145   13010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:08:54.892092   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:08:54.909495   13010 start.go:309] post-start completed in 188.540154ms
	I0601 04:08:54.935250   13010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:08:55.007769   13010 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:08:55.008189   13010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:08:55.008241   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:55.080200   13010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51586 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:08:55.164732   13010 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:08:55.169243   13010 start.go:134] duration metric: createHost completed in 8.934143719s
	I0601 04:08:55.169265   13010 start.go:81] releasing machines lock for "old-k8s-version-20220601040844-2342", held for 8.934323487s
	I0601 04:08:55.169344   13010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:08:55.239974   13010 ssh_runner.go:195] Run: systemctl --version
	I0601 04:08:55.239974   13010 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:08:55.240036   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:55.240040   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:55.319409   13010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51586 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:08:55.320706   13010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51586 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:08:55.403933   13010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:08:55.539625   13010 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:08:55.549728   13010 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:08:55.549782   13010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:08:55.559077   13010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:08:55.572462   13010 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:08:55.640645   13010 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:08:55.712974   13010 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:08:55.722510   13010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:08:55.791264   13010 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:08:55.800846   13010 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:08:55.837379   13010 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:08:55.916899   13010 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 04:08:55.917067   13010 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601040844-2342 dig +short host.docker.internal
	I0601 04:08:56.062832   13010 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:08:56.062926   13010 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:08:56.067229   13010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:08:56.077213   13010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:08:56.148783   13010 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:08:56.148845   13010 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:08:56.181401   13010 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:08:56.181416   13010 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:08:56.181487   13010 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:08:56.212904   13010 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:08:56.212922   13010 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:08:56.212997   13010 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:08:56.285514   13010 cni.go:95] Creating CNI manager for ""
	I0601 04:08:56.285527   13010 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:08:56.285540   13010 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:08:56.285561   13010 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601040844-2342 NodeName:old-k8s-version-20220601040844-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:08:56.285676   13010 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601040844-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601040844-2342
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:08:56.285757   13010 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601040844-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:08:56.285813   13010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 04:08:56.293624   13010 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:08:56.293675   13010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:08:56.301150   13010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0601 04:08:56.313858   13010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:08:56.326399   13010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0601 04:08:56.339053   13010 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:08:56.344944   13010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:08:56.355457   13010 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342 for IP: 192.168.58.2
	I0601 04:08:56.355561   13010 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:08:56.355612   13010 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:08:56.355655   13010 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.key
	I0601 04:08:56.355664   13010 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.crt with IP's: []
	I0601 04:08:56.500519   13010 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.crt ...
	I0601 04:08:56.500537   13010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.crt: {Name:mk0cc30adddfa5c143dc1528adb59a488da55086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:08:56.500869   13010 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.key ...
	I0601 04:08:56.500879   13010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.key: {Name:mk3f63964b8f2e22538f45801b3d01802c1c96c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:08:56.501118   13010 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key.cee25041
	I0601 04:08:56.501133   13010 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 04:08:56.626598   13010 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt.cee25041 ...
	I0601 04:08:56.626610   13010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt.cee25041: {Name:mk61dc7aa5bdca9d1b4a7c14af7d305bfec271a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:08:56.626855   13010 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key.cee25041 ...
	I0601 04:08:56.626865   13010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key.cee25041: {Name:mk80eb9289568b799f881a0a4de207f10a16f162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:08:56.627095   13010 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt
	I0601 04:08:56.627246   13010 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key
	I0601 04:08:56.627415   13010 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key
	I0601 04:08:56.627430   13010 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.crt with IP's: []
	I0601 04:08:56.726045   13010 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.crt ...
	I0601 04:08:56.726054   13010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.crt: {Name:mk3fa15b95b824005659f5b495d1c50cff098cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:08:56.726311   13010 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key ...
	I0601 04:08:56.726318   13010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key: {Name:mk2ddf2d6ad0429321e0124f9acdd31cbedc513f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:08:56.726740   13010 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:08:56.726779   13010 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:08:56.726788   13010 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:08:56.726817   13010 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:08:56.726846   13010 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:08:56.726875   13010 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:08:56.726933   13010 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:08:56.727426   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:08:56.746346   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:08:56.766486   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:08:56.786440   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:08:56.814514   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:08:56.838067   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:08:56.862816   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:08:56.882262   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:08:56.901106   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:08:56.920222   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:08:56.937876   13010 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:08:56.955429   13010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:08:56.969100   13010 ssh_runner.go:195] Run: openssl version
	I0601 04:08:56.979396   13010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:08:56.988974   13010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:08:56.993925   13010 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:08:56.993983   13010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:08:56.999530   13010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:08:57.008449   13010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:08:57.021969   13010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:08:57.027460   13010 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:08:57.027504   13010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:08:57.033246   13010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:08:57.041396   13010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:08:57.049271   13010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:08:57.053251   13010 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:08:57.053326   13010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:08:57.059500   13010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:08:57.067529   13010 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:08:57.067622   13010 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:08:57.096561   13010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:08:57.104458   13010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:08:57.111546   13010 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:08:57.111588   13010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:08:57.118987   13010 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:08:57.119016   13010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:08:57.879336   13010 out.go:204]   - Generating certificates and keys ...
	I0601 04:09:00.468187   13010 out.go:204]   - Booting up control plane ...
	W0601 04:10:55.374232   13010 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220601040844-2342 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220601040844-2342 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220601040844-2342 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220601040844-2342 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 04:10:55.374263   13010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:10:55.796813   13010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:10:55.806998   13010 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:10:55.807039   13010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:10:55.815066   13010 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:10:55.815088   13010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:10:56.539753   13010 out.go:204]   - Generating certificates and keys ...
	I0601 04:10:57.218363   13010 out.go:204]   - Booting up control plane ...
	I0601 04:12:52.134499   13010 kubeadm.go:397] StartCluster complete in 3m55.06438789s
	I0601 04:12:52.134591   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:12:52.166452   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.166465   13010 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:12:52.166520   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:12:52.197045   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.197056   13010 logs.go:276] No container was found matching "etcd"
	I0601 04:12:52.197111   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:12:52.228273   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.228286   13010 logs.go:276] No container was found matching "coredns"
	I0601 04:12:52.228343   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:12:52.257875   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.257887   13010 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:12:52.257940   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:12:52.286891   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.286905   13010 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:12:52.286964   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:12:52.316970   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.316982   13010 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:12:52.317038   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:12:52.346364   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.346376   13010 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:12:52.346439   13010 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:12:52.375194   13010 logs.go:274] 0 containers: []
	W0601 04:12:52.375205   13010 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:12:52.375223   13010 logs.go:123] Gathering logs for dmesg ...
	I0601 04:12:52.375230   13010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:12:52.387207   13010 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:12:52.387221   13010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:12:52.440504   13010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:12:52.440514   13010 logs.go:123] Gathering logs for Docker ...
	I0601 04:12:52.440522   13010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:12:52.454274   13010 logs.go:123] Gathering logs for container status ...
	I0601 04:12:52.454286   13010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:12:54.507467   13010 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053145235s)
	I0601 04:12:54.507603   13010 logs.go:123] Gathering logs for kubelet ...
	I0601 04:12:54.507610   13010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0601 04:12:54.546264   13010 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 04:12:54.546282   13010 out.go:239] * 
	* 
	W0601 04:12:54.546405   13010 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:12:54.546420   13010 out.go:239] * 
	* 
	W0601 04:12:54.546941   13010 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 04:12:54.626744   13010 out.go:177] 
	W0601 04:12:54.648956   13010 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:12:54.649150   13010 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 04:12:54.649245   13010 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 04:12:54.691647   13010 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
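The repeated kubelet-check failures above show the kubelet health endpoint at 127.0.0.1:10248 refusing connections, and minikube's own suggestion in the captured stderr is to inspect the kubelet and retry with the systemd cgroup driver. A minimal diagnosis/retry sequence for this profile might look like the following sketch (the container name is taken from this run; it assumes the kicbase container exposes systemd and an inner dockerd, as it normally does):

    # Kubelet diagnostics quoted from the kubeadm advice in the output above
    docker exec old-k8s-version-20220601040844-2342 systemctl status kubelet
    docker exec old-k8s-version-20220601040844-2342 journalctl -xeu kubelet
    # Look for control-plane containers that crashed after being started by the inner dockerd
    docker exec old-k8s-version-20220601040844-2342 docker ps -a | grep kube | grep -v pause
    # Retry the start with the cgroup driver the error message suggests
    out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 --driver=docker \
      --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd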
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601040844-2342
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601040844-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef",
	        "Created": "2022-06-01T11:08:51.714948054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:08:52.03954689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hosts",
	        "LogPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef-json.log",
	        "Name": "/old-k8s-version-20220601040844-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601040844-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601040844-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601040844-2342",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601040844-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601040844-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff35b527a7d795eaae377452f6f84a2ab59d3e8a63ed1d11c174002578be0c98",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51586"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51587"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51584"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51585"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ff35b527a7d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601040844-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91a44163d235",
	                        "old-k8s-version-20220601040844-2342"
	                    ],
	                    "NetworkID": "19418e1daf902e10e91ecb0632ae46e6cbb8b43c0deeca829a591ae95b7f1e4b",
	                    "EndpointID": "1ede62e7d615ff8e30a6e79a7faeb0756e32780ae82ba1401755b22bde0262c1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
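The full docker inspect dump above is what the post-mortem helper captures; when only the fields the harness actually checks are of interest (container state, network, host port mappings), docker's --format flag can pull them directly. A sketch against this run's container:

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' old-k8s-version-20220601040844-2342
    docker inspect -f '{{json .NetworkSettings.Networks}}' old-k8s-version-20220601040844-2342
    docker port old-k8s-version-20220601040844-2342 8443/tcp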
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 6 (443.559579ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 04:12:55.293149   13450 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601040844-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601040844-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.44s)
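The exit status 6 from the status check comes from the kubeconfig no longer containing an entry for this profile ("old-k8s-version-20220601040844-2342" does not appear in the jenkins kubeconfig), which is also why the output warns about a stale kubectl context. Roughly, the commands the warning and the earlier error box point at would be:

    # Refresh the kubeconfig entry for this profile, as the status output suggests
    out/minikube-darwin-amd64 update-context -p old-k8s-version-20220601040844-2342
    # Collect logs for a GitHub issue, as the error box suggests
    out/minikube-darwin-amd64 logs -p old-k8s-version-20220601040844-2342 --file=logs.txt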

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601040844-2342 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601040844-2342 create -f testdata/busybox.yaml: exit status 1 (29.823332ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220601040844-2342" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:198: kubectl --context old-k8s-version-20220601040844-2342 create -f testdata/busybox.yaml failed: exit status 1
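The DeployApp step fails immediately because the FirstStart failure above never created a kubeconfig context for the profile, so any kubectl invocation with --context old-k8s-version-20220601040844-2342 exits with "context ... does not exist". A quick way to confirm which contexts actually exist, as a sketch:

    kubectl config get-contexts
    kubectl config current-context
    # The missing context would normally be (re)created by a successful minikube start of this profile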
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601040844-2342
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601040844-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef",
	        "Created": "2022-06-01T11:08:51.714948054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:08:52.03954689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hosts",
	        "LogPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef-json.log",
	        "Name": "/old-k8s-version-20220601040844-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601040844-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601040844-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601040844-2342",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601040844-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601040844-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff35b527a7d795eaae377452f6f84a2ab59d3e8a63ed1d11c174002578be0c98",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51586"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51587"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51584"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51585"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ff35b527a7d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601040844-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91a44163d235",
	                        "old-k8s-version-20220601040844-2342"
	                    ],
	                    "NetworkID": "19418e1daf902e10e91ecb0632ae46e6cbb8b43c0deeca829a591ae95b7f1e4b",
	                    "EndpointID": "1ede62e7d615ff8e30a6e79a7faeb0756e32780ae82ba1401755b22bde0262c1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 6 (443.888735ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 04:12:55.838199   13463 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601040844-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601040844-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601040844-2342
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601040844-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef",
	        "Created": "2022-06-01T11:08:51.714948054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:08:52.03954689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hosts",
	        "LogPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef-json.log",
	        "Name": "/old-k8s-version-20220601040844-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601040844-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601040844-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601040844-2342",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601040844-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601040844-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff35b527a7d795eaae377452f6f84a2ab59d3e8a63ed1d11c174002578be0c98",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51586"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51587"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51584"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51585"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ff35b527a7d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601040844-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91a44163d235",
	                        "old-k8s-version-20220601040844-2342"
	                    ],
	                    "NetworkID": "19418e1daf902e10e91ecb0632ae46e6cbb8b43c0deeca829a591ae95b7f1e4b",
	                    "EndpointID": "1ede62e7d615ff8e30a6e79a7faeb0756e32780ae82ba1401755b22bde0262c1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 6 (441.914352ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 04:12:56.355894   13475 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601040844-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601040844-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.06s)
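
The failure above is driven by the stale kubeconfig entry reported in the status output ("does not appear in .../kubeconfig"), not by a stopped host. A minimal sketch of the fix the tool itself suggests, applied to this profile and binary path (hypothetical invocation, not part of the recorded test run):

	out/minikube-darwin-amd64 update-context -p old-k8s-version-20220601040844-2342
	out/minikube-darwin-amd64 status -p old-k8s-version-20220601040844-2342

update-context rewrites the kubeconfig endpoint for the profile; the follow-up status call would then be expected to resolve the endpoint instead of exiting with status 6.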

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220601040844-2342 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0601 04:12:56.867661    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:58.505299    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:13:01.987909    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:08.496423    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:13:08.919972    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:13:12.230096    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:30.397579    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:30.402823    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:30.413167    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:30.433581    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:30.475861    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:30.558049    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:30.718488    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:31.039453    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:31.681764    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:32.710625    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:32.962235    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:35.523175    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:40.644034    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:49.458960    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:13:50.886546    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:13:55.766457    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:14:00.581709    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:00.588126    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:00.598465    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:00.618668    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:00.660855    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:00.742206    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:00.902477    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:01.223537    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:01.863734    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:03.145482    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:03.702185    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 04:14:05.706077    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:10.827245    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:11.369081    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:13.671902    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:21.068050    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220601040844-2342 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.152346614s)

                                                
                                                
-- stdout --
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220601040844-2342 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220601040844-2342 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601040844-2342 describe deploy/metrics-server -n kube-system: exit status 1 (30.064448ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220601040844-2342" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220601040844-2342 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
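
The assertion above expects the metrics-server deployment to reference the overridden registry/image passed via --images and --registries (fake.domain/k8s.gcr.io/echoserver:1.4). With a working kubeconfig context for the profile, the same check could be made by hand; a sketch, assuming the deployment had actually been created (hypothetical, not executed in this run):

	kubectl --context old-k8s-version-20220601040844-2342 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

In this run the describe call failed only because the context is missing from the kubeconfig, so the deployment image was never inspected.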
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601040844-2342
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601040844-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef",
	        "Created": "2022-06-01T11:08:51.714948054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:08:52.03954689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hosts",
	        "LogPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef-json.log",
	        "Name": "/old-k8s-version-20220601040844-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601040844-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601040844-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601040844-2342",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601040844-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601040844-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff35b527a7d795eaae377452f6f84a2ab59d3e8a63ed1d11c174002578be0c98",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51586"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51587"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51584"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51585"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ff35b527a7d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601040844-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91a44163d235",
	                        "old-k8s-version-20220601040844-2342"
	                    ],
	                    "NetworkID": "19418e1daf902e10e91ecb0632ae46e6cbb8b43c0deeca829a591ae95b7f1e4b",
	                    "EndpointID": "1ede62e7d615ff8e30a6e79a7faeb0756e32780ae82ba1401755b22bde0262c1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 6 (445.870141ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 04:14:26.058773   13525 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601040844-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601040844-2342" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.70s)
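
The recorded root cause is that the addon callback's kubectl apply could not reach the apiserver at 127.0.0.1:8443 (connection refused), so enabling metrics-server exited with status 10. The report's own advice is to collect the cluster logs; a sketch of that collection step for this profile (hypothetical invocation, assuming the same binary path used throughout this report):

	out/minikube-darwin-amd64 logs -p old-k8s-version-20220601040844-2342 --file=logs.txt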

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (493.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0601 04:14:30.841115    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:14:31.622548    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:41.549514    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:52.331798    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:14:59.362139    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:15:11.381393    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:15:14.656759    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:15:22.510218    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:15:35.593078    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:15:42.348974    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:15:49.645815    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m8.649554103s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220601040844-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220601040844-2342 in cluster old-k8s-version-20220601040844-2342
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220601040844-2342" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 04:14:28.086015   13556 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:14:28.086165   13556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:14:28.086170   13556 out.go:309] Setting ErrFile to fd 2...
	I0601 04:14:28.086174   13556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:14:28.086295   13556 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:14:28.086578   13556 out.go:303] Setting JSON to false
	I0601 04:14:28.101590   13556 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4438,"bootTime":1654077630,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:14:28.101682   13556 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:14:28.123877   13556 out.go:177] * [old-k8s-version-20220601040844-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:14:28.166654   13556 notify.go:193] Checking for updates...
	I0601 04:14:28.188297   13556 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:14:28.209461   13556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:14:28.230448   13556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:14:28.251496   13556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:14:28.272505   13556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:14:28.294847   13556 config.go:178] Loaded profile config "old-k8s-version-20220601040844-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 04:14:28.317393   13556 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 04:14:28.338637   13556 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:14:28.412118   13556 docker.go:137] docker version: linux-20.10.14
	I0601 04:14:28.412264   13556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:14:28.539193   13556 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:14:28.479654171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:14:28.582897   13556 out.go:177] * Using the docker driver based on existing profile
	I0601 04:14:28.604731   13556 start.go:284] selected driver: docker
	I0601 04:14:28.604751   13556 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:28.604893   13556 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:14:28.607936   13556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:14:28.735652   13556 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:14:28.674188534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:14:28.735832   13556 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:14:28.735851   13556 cni.go:95] Creating CNI manager for ""
	I0601 04:14:28.735860   13556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:14:28.735872   13556 start_flags.go:306] config:
	{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:28.779373   13556 out.go:177] * Starting control plane node old-k8s-version-20220601040844-2342 in cluster old-k8s-version-20220601040844-2342
	I0601 04:14:28.800522   13556 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:14:28.821684   13556 out.go:177] * Pulling base image ...
	I0601 04:14:28.863807   13556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:14:28.863829   13556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:14:28.863901   13556 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 04:14:28.863913   13556 cache.go:57] Caching tarball of preloaded images
	I0601 04:14:28.864077   13556 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:14:28.864104   13556 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 04:14:28.864941   13556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:14:28.928843   13556 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:14:28.928860   13556 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:14:28.928872   13556 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:14:28.928926   13556 start.go:352] acquiring machines lock for old-k8s-version-20220601040844-2342: {Name:mkf87fe8c4a511c3ef565c4140ef4a74b527ad92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:14:28.929011   13556 start.go:356] acquired machines lock for "old-k8s-version-20220601040844-2342" in 58.74µs
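The machines lock acquired above is configured with a 500ms retry delay and a 10m0s timeout (see the lock parameters two lines earlier). A minimal sketch of that acquire-with-retry idea using an exclusive lock file; the path and the demo timings are illustrative, not minikube's actual lock implementation:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock repeatedly tries to create lockPath exclusively until it
    // succeeds or the timeout elapses, sleeping delay between attempts.
    func acquireLock(lockPath string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
            if err == nil {
                f.Close()
                return func() { os.Remove(lockPath) }, nil
            }
            if !os.IsExist(err) {
                return nil, err // unexpected failure, not just "already held"
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s after %s", lockPath, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        release, err := acquireLock("/tmp/machines-demo.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println("lock error:", err)
            return
        }
        defer release()
        fmt.Printf("acquired lock in %s\n", time.Since(start))
    }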
	I0601 04:14:28.929029   13556 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:14:28.929038   13556 fix.go:55] fixHost starting: 
	I0601 04:14:28.929269   13556 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:14:28.996525   13556 fix.go:103] recreateIfNeeded on old-k8s-version-20220601040844-2342: state=Stopped err=<nil>
	W0601 04:14:28.996561   13556 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:14:29.018678   13556 out.go:177] * Restarting existing docker container for "old-k8s-version-20220601040844-2342" ...
	I0601 04:14:29.040137   13556 cli_runner.go:164] Run: docker start old-k8s-version-20220601040844-2342
	I0601 04:14:29.396533   13556 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:14:29.469773   13556 kic.go:416] container "old-k8s-version-20220601040844-2342" state is running.
	I0601 04:14:29.470677   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:29.548417   13556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:14:29.548828   13556 machine.go:88] provisioning docker machine ...
	I0601 04:14:29.548849   13556 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601040844-2342"
	I0601 04:14:29.548931   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:29.621945   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:29.622162   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:29.622174   13556 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601040844-2342 && echo "old-k8s-version-20220601040844-2342" | sudo tee /etc/hostname
	I0601 04:14:29.747098   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601040844-2342
	
	I0601 04:14:29.747180   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:29.820328   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:29.820477   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:29.820500   13556 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601040844-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601040844-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601040844-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:14:29.940163   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: 
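Each provisioning step above (set the hostname, then patch /etc/hosts) is sent as a single shell command over the SSH port that Docker forwarded for the container, 127.0.0.1:52365 in this run. A rough equivalent using golang.org/x/crypto/ssh instead of libmachine's native client; the key path below is hypothetical and the command is the hostname one from the log:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Hypothetical key location; this run uses the id_rsa under the
        // .minikube/machines/<profile> directory printed by sshutil.go above.
        keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa"))
        if err != nil {
            fmt.Println("read key:", err)
            return
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            fmt.Println("parse key:", err)
            return
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node
        }
        // 127.0.0.1:52365 is the host port Docker mapped to the container's 22/tcp.
        client, err := ssh.Dial("tcp", "127.0.0.1:52365", cfg)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            fmt.Println("session:", err)
            return
        }
        defer session.Close()
        out, err := session.CombinedOutput(`sudo hostname old-k8s-version-20220601040844-2342 && echo "old-k8s-version-20220601040844-2342" | sudo tee /etc/hostname`)
        fmt.Printf("err=%v output=%s\n", err, out)
    }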
	I0601 04:14:29.940186   13556 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:14:29.940211   13556 ubuntu.go:177] setting up certificates
	I0601 04:14:29.940220   13556 provision.go:83] configureAuth start
	I0601 04:14:29.940277   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:30.010662   13556 provision.go:138] copyHostCerts
	I0601 04:14:30.010737   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:14:30.010745   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:14:30.010841   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:14:30.011037   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:14:30.011045   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:14:30.011106   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:14:30.011262   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:14:30.011268   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:14:30.011329   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:14:30.011453   13556 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601040844-2342 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601040844-2342]
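The server certificate generated above carries both DNS and IP SANs (192.168.58.2, 127.0.0.1, localhost, minikube and the profile name). A minimal, self-signed sketch of issuing a certificate with those SANs via crypto/x509; it is illustrative only, since minikube signs the server cert against its own CA rather than self-signing:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20220601040844-2342"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision log line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20220601040844-2342"},
            IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed for brevity: the template acts as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }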
	I0601 04:14:30.286843   13556 provision.go:172] copyRemoteCerts
	I0601 04:14:30.286906   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:14:30.286990   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.358351   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:30.446260   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 04:14:30.462814   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:14:30.479664   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:14:30.496600   13556 provision.go:86] duration metric: configureAuth took 556.361696ms
	I0601 04:14:30.496613   13556 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:14:30.496772   13556 config.go:178] Loaded profile config "old-k8s-version-20220601040844-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 04:14:30.496832   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.590582   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.590744   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.590754   13556 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:14:30.708363   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:14:30.708375   13556 ubuntu.go:71] root file system type: overlay
	I0601 04:14:30.708495   13556 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:14:30.708557   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.779562   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.779734   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.779783   13556 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:14:30.905450   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:14:30.905568   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.975523   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.975670   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.975682   13556 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:14:31.100262   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: 
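The `diff ... || { mv ...; systemctl ... }` command above only swaps in /lib/systemd/system/docker.service and restarts the daemon when the freshly rendered unit actually differs from the installed one. A local sketch of that write-new/compare/swap idea; paths are the real targets so the systemctl steps would need root, and this is not the ssh_runner-based code minikube uses:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes rendered to path+".new" and, only when it differs
    // from the file currently on disk, moves it into place and reloads/restarts
    // the named systemd unit.
    func installIfChanged(path string, rendered []byte, unit string) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // identical unit: leave the running daemon alone
        }
        if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", unit},
            {"restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unitBody := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // abbreviated body
        if err := installIfChanged("/lib/systemd/system/docker.service", unitBody, "docker"); err != nil {
            fmt.Println(err)
        }
    }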
	I0601 04:14:31.100277   13556 machine.go:91] provisioned docker machine in 1.551423404s
	I0601 04:14:31.100285   13556 start.go:306] post-start starting for "old-k8s-version-20220601040844-2342" (driver="docker")
	I0601 04:14:31.100304   13556 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:14:31.100385   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:14:31.100437   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.170558   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.256710   13556 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:14:31.260531   13556 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:14:31.260550   13556 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:14:31.260557   13556 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:14:31.260562   13556 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:14:31.260570   13556 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:14:31.260671   13556 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:14:31.260804   13556 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:14:31.260969   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:14:31.268042   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:14:31.284690   13556 start.go:309] post-start completed in 184.378635ms
	I0601 04:14:31.284756   13556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:14:31.284800   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.355208   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.441386   13556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:14:31.446321   13556 fix.go:57] fixHost completed within 2.517256464s
	I0601 04:14:31.446333   13556 start.go:81] releasing machines lock for "old-k8s-version-20220601040844-2342", held for 2.517286389s
	I0601 04:14:31.446396   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:31.516485   13556 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:14:31.516500   13556 ssh_runner.go:195] Run: systemctl --version
	I0601 04:14:31.516551   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.516552   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.592361   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.594251   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.804333   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:14:31.815953   13556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:14:31.825522   13556 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:14:31.825585   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:14:31.834978   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:14:31.847979   13556 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:14:31.913965   13556 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:14:31.999816   13556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:14:32.009709   13556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:14:32.071375   13556 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:14:32.081029   13556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:14:32.117180   13556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:14:32.198594   13556 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 04:14:32.198786   13556 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601040844-2342 dig +short host.docker.internal
	I0601 04:14:32.332443   13556 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:14:32.332544   13556 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:14:32.336875   13556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:14:32.346622   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:32.417145   13556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:14:32.417220   13556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:14:32.446920   13556 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:14:32.446935   13556 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:14:32.446997   13556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:14:32.477668   13556 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:14:32.477689   13556 cache_images.go:84] Images are preloaded, skipping loading
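The two `docker images --format {{.Repository}}:{{.Tag}}` runs above are how the start path decides that the preloaded tarball does not need to be extracted again: every image expected for v1.16.0 is already present in the node's daemon. A rough sketch of that check, with the expected list copied from the output above; this is not the actual cache_images implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        expected := []string{
            "k8s.gcr.io/kube-apiserver:v1.16.0",
            "k8s.gcr.io/kube-controller-manager:v1.16.0",
            "k8s.gcr.io/kube-scheduler:v1.16.0",
            "k8s.gcr.io/kube-proxy:v1.16.0",
            "k8s.gcr.io/etcd:3.3.15-0",
            "k8s.gcr.io/coredns:1.6.2",
            "k8s.gcr.io/pause:3.1",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            fmt.Println("docker images failed:", err)
            return
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        missing := 0
        for _, img := range expected {
            if !have[img] {
                fmt.Println("missing:", img)
                missing++
            }
        }
        if missing == 0 {
            fmt.Println("images already preloaded, extraction can be skipped")
        }
    }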
	I0601 04:14:32.477781   13556 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:14:32.550798   13556 cni.go:95] Creating CNI manager for ""
	I0601 04:14:32.550810   13556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:14:32.550825   13556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:14:32.550841   13556 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601040844-2342 NodeName:old-k8s-version-20220601040844-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:14:32.550955   13556 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601040844-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601040844-2342
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:14:32.551029   13556 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601040844-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
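The `cgroupDriver: systemd` line in the KubeletConfiguration rendered above mirrors what `docker info --format {{.CgroupDriver}}` reported from inside the node a few entries earlier; the kubelet and the container runtime must agree on the cgroup driver or the kubelet fails to start. A small sketch of just that detection step (the query only, not how it is wired into the kubeadm template):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // detectCgroupDriver asks the local Docker daemon which cgroup driver it
    // uses, so a kubelet config can be rendered with a matching value.
    func detectCgroupDriver() (string, error) {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        driver, err := detectCgroupDriver()
        if err != nil {
            fmt.Println("could not query docker:", err)
            return
        }
        fmt.Printf("cgroupDriver: %s\n", driver)
    }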
	I0601 04:14:32.551089   13556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 04:14:32.558618   13556 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:14:32.558675   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:14:32.565664   13556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0601 04:14:32.578127   13556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:14:32.591071   13556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0601 04:14:32.603679   13556 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:14:32.607411   13556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:14:32.616789   13556 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342 for IP: 192.168.58.2
	I0601 04:14:32.616910   13556 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:14:32.616965   13556 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:14:32.617049   13556 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.key
	I0601 04:14:32.617110   13556 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key.cee25041
	I0601 04:14:32.617164   13556 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key
	I0601 04:14:32.617380   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:14:32.617426   13556 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:14:32.617438   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:14:32.617470   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:14:32.617545   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:14:32.617575   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:14:32.617669   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:14:32.618227   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:14:32.635461   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:14:32.652286   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:14:32.671018   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:14:32.688359   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:14:32.705117   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:14:32.724039   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:14:32.740670   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:14:32.759632   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:14:32.776280   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:14:32.793455   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:14:32.810265   13556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:14:32.823671   13556 ssh_runner.go:195] Run: openssl version
	I0601 04:14:32.829634   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:14:32.838396   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.842798   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.842856   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.847925   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:14:32.855315   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:14:32.862997   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.866628   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.866669   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.871768   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:14:32.878782   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:14:32.886516   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.890228   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.890268   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.895408   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:14:32.904071   13556 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:fals
e ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:32.904180   13556 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:14:32.940041   13556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:14:32.947460   13556 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:14:32.947477   13556 kubeadm.go:626] restartCluster start
	I0601 04:14:32.947520   13556 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:14:32.954241   13556 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:32.954322   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:33.025948   13556 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220601040844-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:14:33.026113   13556 kubeconfig.go:127] "old-k8s-version-20220601040844-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:14:33.027094   13556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:14:33.028479   13556 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:14:33.036254   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.036295   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.044520   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.246687   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.246868   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.257788   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.444816   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.444913   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.455791   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.644674   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.644862   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.655944   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.846690   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.846895   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.857517   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.044618   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.044715   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.054967   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.245414   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.245523   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.254445   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.445454   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.445514   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.454327   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.644792   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.644963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.655473   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.846688   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.846841   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.858268   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.044728   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.044849   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.054648   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.246768   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.246904   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.258518   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.445824   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.445917   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.459006   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.644848   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.644981   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.655077   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.846003   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.846189   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.856593   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.046650   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:36.046821   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:36.056452   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.056461   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:36.056500   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:36.064526   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.064537   13556 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:14:36.064545   13556 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:14:36.064600   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:14:36.094502   13556 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:14:36.105031   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:14:36.112474   13556 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun  1 11:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jun  1 11:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5923 Jun  1 11:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 Jun  1 11:10 /etc/kubernetes/scheduler.conf
	
	I0601 04:14:36.112530   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:14:36.119709   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:14:36.127589   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:14:36.135123   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:14:36.142623   13556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:14:36.149999   13556 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:14:36.150008   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:36.200699   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.148880   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.358149   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.419637   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.470094   13556 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:14:37.470154   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:37.978831   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:38.480959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:38.978757   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:39.478959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:39.979253   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:40.478829   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:40.978939   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:41.479002   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:41.978797   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.478992   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.978850   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:43.478836   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:43.978895   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:44.479304   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:44.978970   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:45.480933   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:45.978812   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:46.478839   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:46.978909   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.480857   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.979175   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:48.481101   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:48.980947   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:49.478904   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:49.978978   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:50.478963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:50.979003   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:51.481089   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:51.979642   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:52.478906   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:52.980420   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:53.478907   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:53.978960   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:54.481043   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:54.979768   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:55.478958   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:55.979398   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:56.481048   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:56.979050   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:57.479407   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:57.979337   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:58.478988   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:58.981010   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:59.479766   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:59.979321   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:00.479311   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:00.980933   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:01.478999   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:01.979261   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:02.479982   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:02.979180   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:03.480051   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:03.980590   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:04.479052   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:04.979458   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:05.481240   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:05.979974   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:06.479186   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:06.979066   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:07.479325   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:07.981279   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:08.479532   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:08.979591   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:09.479222   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:09.979845   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:10.479574   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:10.979559   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:11.479793   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:11.979666   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:12.481279   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:12.981040   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:13.479755   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:13.979822   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:14.480950   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:14.979150   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:15.481354   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:15.980964   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:16.479268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:16.980881   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:17.479254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:17.979479   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:18.479959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:18.980556   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:19.479459   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:19.980773   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:20.479361   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:20.979442   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:21.481299   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:21.979515   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:22.479254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:22.979294   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:23.480422   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:23.979385   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:24.479212   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:24.979328   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.479798   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.979268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.480583   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.980525   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:27.479476   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:27.979316   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:28.479579   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:28.979569   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:29.479329   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:29.979458   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:30.479302   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:30.981410   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:31.479357   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:31.979445   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:32.481142   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:32.979961   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:33.479768   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:33.981202   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:34.479961   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:34.981313   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:35.480043   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:35.979601   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:36.481401   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:36.981605   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:37.479451   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:37.508591   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.508605   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:37.508660   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:37.537446   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.537458   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:37.537516   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:37.567876   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.567889   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:37.567948   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:37.598482   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.598495   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:37.598558   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:37.628232   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.628246   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:37.628311   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:37.657793   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.657804   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:37.657857   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:37.686640   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.686653   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:37.686709   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:37.715419   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.715431   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:37.715444   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:37.715451   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:39.769146   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053659443s)
	I0601 04:15:39.769292   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:39.769300   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:39.807272   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:39.807284   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:39.819291   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:39.819303   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:39.871102   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:39.871120   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:39.871129   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:42.383986   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:42.479616   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:42.510337   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.510350   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:42.510410   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:42.539205   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.539218   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:42.539278   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:42.568639   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.568652   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:42.568706   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:42.599882   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.599895   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:42.599958   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:42.635852   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.635869   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:42.635931   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:42.667445   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.667458   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:42.667520   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:42.698074   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.698087   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:42.698144   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:42.728427   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.728443   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:42.728450   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:42.728456   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:42.767219   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:42.767231   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:42.778821   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:42.778833   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:42.831064   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:42.831076   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:42.831082   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:42.843486   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:42.843502   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:44.896657   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053119768s)
	I0601 04:15:47.398865   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:47.481492   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:47.529068   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.529088   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:47.529149   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:47.586875   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.586904   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:47.586983   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:47.638013   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.638050   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:47.638123   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:47.689527   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.689546   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:47.689618   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:47.725472   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.725488   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:47.725560   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:47.765312   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.765326   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:47.765394   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:47.796175   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.796187   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:47.796245   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:47.829157   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.829171   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:47.829180   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:47.829188   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:47.875377   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:47.875396   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:47.888770   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:47.888784   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:47.976340   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:47.976363   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:47.976372   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:47.991532   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:47.991545   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:50.056745   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065165785s)
	I0601 04:15:52.557096   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:52.979736   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:53.015032   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.015050   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:53.015130   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:53.052874   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.052890   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:53.052980   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:53.090000   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.109400   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:53.109482   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:53.143853   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.143871   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:53.143936   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:53.176667   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.176682   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:53.176750   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:53.209287   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.209304   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:53.209363   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:53.249867   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.249882   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:53.249952   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:53.291302   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.291317   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:53.291324   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:53.291331   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:53.347312   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:53.347331   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:53.364045   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:53.364061   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:53.437580   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:53.437590   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:53.437599   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:53.452321   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:53.452356   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:55.517741   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065349291s)
	I0601 04:15:58.018119   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:58.479675   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:58.510300   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.510315   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:58.510379   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:58.539824   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.539837   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:58.539903   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:58.574431   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.574444   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:58.574506   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:58.608048   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.608062   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:58.608126   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:58.643132   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.643149   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:58.643270   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:58.684314   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.684331   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:58.684411   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:58.729479   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.729493   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:58.729562   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:58.763728   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.763744   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:58.763752   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:58.763760   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:58.810477   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:58.810505   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:58.831095   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:58.831117   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:58.902361   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:58.902375   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:58.902384   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:58.918761   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:58.918777   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:00.984641   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065827748s)
	I0601 04:16:03.485999   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:03.979913   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:04.010953   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.010966   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:04.011018   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:04.039669   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.039684   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:04.039747   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:04.070923   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.070936   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:04.070991   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:04.100811   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.100824   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:04.100880   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:04.131464   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.131476   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:04.131531   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:04.165158   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.165170   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:04.165224   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:04.194459   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.194472   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:04.194528   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:04.223766   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.223779   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:04.223786   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:04.223793   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:04.264008   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:04.264021   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:04.275889   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:04.275901   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:04.333158   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:04.333175   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:04.333191   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:04.347399   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:04.347412   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:06.399938   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052491924s)
	I0601 04:16:08.902254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:08.980476   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:09.010579   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.010592   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:09.010645   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:09.038706   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.038718   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:09.038772   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:09.067068   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.067080   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:09.067135   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:09.097407   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.097419   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:09.097475   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:09.127332   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.127344   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:09.127402   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:09.157941   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.157958   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:09.158048   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:09.190368   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.190380   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:09.190435   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:09.223448   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.223461   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:09.223467   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:09.223474   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:09.265193   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:09.265207   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:09.277605   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:09.277624   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:09.331638   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:09.331655   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:09.331663   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:09.345526   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:09.345539   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:11.401324   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055748952s)
	I0601 04:16:13.902794   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:13.981915   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:14.012909   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.012922   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:14.012976   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:14.043088   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.043100   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:14.043156   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:14.073109   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.073121   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:14.073177   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:14.102553   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.102567   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:14.102621   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:14.132315   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.132329   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:14.132376   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:14.161620   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.161633   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:14.161691   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:14.190400   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.190413   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:14.190472   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:14.220208   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.220221   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:14.220228   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:14.220238   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:14.260342   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:14.260355   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:14.273591   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:14.273605   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:14.325967   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:14.325979   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:14.325986   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:14.338048   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:14.338059   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:16.397631   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059537775s)
	I0601 04:16:18.898002   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:18.980224   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:19.011707   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.011721   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:19.011789   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:19.041107   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.041118   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:19.041173   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:19.069931   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.069945   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:19.070004   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:19.099021   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.099032   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:19.099088   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:19.127973   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.127994   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:19.128051   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:19.156955   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.156968   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:19.157023   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:19.186132   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.186144   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:19.186203   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:19.215364   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.215375   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:19.215382   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:19.215390   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:19.227400   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:19.227412   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:21.281212   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053766355s)
	I0601 04:16:21.281318   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:21.281326   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:21.320693   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:21.320705   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:21.332980   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:21.332992   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:21.385783   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:23.888184   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:23.981315   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:24.012581   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.012595   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:24.012650   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:24.042236   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.042248   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:24.042307   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:24.070098   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.070111   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:24.070163   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:24.098624   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.098637   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:24.098696   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:24.127561   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.127574   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:24.127630   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:24.157059   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.157071   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:24.157129   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:24.187116   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.187135   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:24.187211   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:24.216004   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.216017   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:24.216024   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:24.216030   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:24.255821   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:24.255835   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:24.267821   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:24.267832   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:24.319990   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:24.320002   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:24.320010   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:24.331836   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:24.331847   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:26.392627   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060744494s)
	I0601 04:16:28.895005   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:28.981573   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:29.012096   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.012109   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:29.012164   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:29.040693   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.040707   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:29.040760   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:29.070396   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.070409   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:29.070478   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:29.100948   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.100961   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:29.101017   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:29.130251   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.130263   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:29.130318   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:29.158697   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.158709   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:29.158764   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:29.187980   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.187993   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:29.188049   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:29.216948   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.216959   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:29.216970   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:29.216977   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:29.256025   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:29.256038   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:29.267334   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:29.267346   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:29.319728   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:29.319745   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:29.319752   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:29.331962   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:29.331973   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:31.389033   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057024918s)
	I0601 04:16:33.889268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:33.980563   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:34.010516   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.010529   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:34.010584   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:34.039957   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.039968   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:34.040022   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:34.069056   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.069070   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:34.069126   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:34.099006   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.099022   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:34.099080   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:34.128051   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.128065   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:34.128123   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:34.157852   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.157865   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:34.157922   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:34.187417   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.187429   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:34.187484   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:34.217119   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.217131   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:34.217138   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:34.217146   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:34.269395   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:34.269405   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:34.269413   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:34.280972   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:34.280984   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:36.337032   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056013856s)
	I0601 04:16:36.337139   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:36.337145   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:36.376237   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:36.376250   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:38.890370   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:38.982134   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:39.013111   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.013124   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:39.013178   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:39.042635   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.042649   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:39.042702   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:39.072345   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.072358   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:39.072420   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:39.101587   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.101601   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:39.101655   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:39.130972   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.130985   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:39.131049   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:39.160564   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.160577   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:39.160630   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:39.190701   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.190714   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:39.190766   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:39.219934   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.219947   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:39.219954   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:39.219961   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:39.231641   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:39.231652   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:39.283515   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:39.283528   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:39.283536   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:39.295882   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:39.295893   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:41.351066   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055135322s)
	I0601 04:16:41.351176   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:41.351183   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:43.892267   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:43.980540   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:44.012230   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.012242   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:44.012300   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:44.042000   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.042012   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:44.042066   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:44.070514   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.070527   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:44.070580   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:44.098378   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.098391   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:44.098453   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:44.128346   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.128359   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:44.128418   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:44.160355   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.160369   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:44.160421   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:44.189319   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.189331   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:44.189396   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:44.217737   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.217749   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:44.217756   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:44.217763   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:44.257762   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:44.257775   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:44.269620   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:44.269632   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:44.322533   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:44.322543   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:44.322550   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:44.334650   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:44.334662   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:46.388281   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053585675s)
	I0601 04:16:48.889019   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:48.981775   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:49.013056   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.013068   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:49.013124   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:49.044207   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.044220   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:49.044276   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:49.073500   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.073512   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:49.073567   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:49.103541   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.103552   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:49.103614   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:49.132668   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.132681   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:49.132744   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:49.163688   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.163701   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:49.163759   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:49.195957   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.195971   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:49.196032   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:49.225925   13556 logs.go:274] 0 containers: []
	W0601 04:16:49.225937   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:49.225944   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:49.225950   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:49.266081   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:49.266096   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:49.278280   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:49.278295   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:49.334931   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:49.334957   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:49.334974   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:49.347656   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:49.347669   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:51.400813   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053108341s)
	I0601 04:16:53.901201   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:53.980173   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:54.009351   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.009363   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:54.009418   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:54.037618   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.037630   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:54.037687   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:54.067162   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.067175   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:54.067229   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:54.099126   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.099139   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:54.099194   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:54.128232   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.128245   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:54.128301   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:54.156975   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.156987   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:54.157068   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:54.190271   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.190284   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:54.190344   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:54.221519   13556 logs.go:274] 0 containers: []
	W0601 04:16:54.221533   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:54.221540   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:54.221549   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:54.262267   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:54.262279   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:54.274144   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:54.274158   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:54.338558   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:54.338570   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:54.338577   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:54.351173   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:54.351187   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:56.406195   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054966626s)
	I0601 04:16:58.906663   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:58.980248   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:59.011173   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.011186   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:59.011246   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:59.043851   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.043862   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:59.043911   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:59.075193   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.075205   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:59.075248   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:59.107762   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.107774   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:59.107828   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:59.137311   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.137323   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:59.137378   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:59.167805   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.167818   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:59.167877   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:59.197673   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.197686   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:59.197742   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:59.228472   13556 logs.go:274] 0 containers: []
	W0601 04:16:59.228485   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:59.228492   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:59.228500   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:59.271554   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:59.271569   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:59.285123   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:59.285138   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:59.345944   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:59.345957   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:59.345964   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:59.359937   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:59.359951   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:01.415662   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055674854s)
	I0601 04:17:03.916174   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:03.981342   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:04.011136   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.011150   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:04.011205   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:04.040097   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.040109   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:04.040163   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:04.073327   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.073341   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:04.073396   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:04.104498   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.104510   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:04.104562   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:04.137629   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.137641   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:04.137694   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:04.172431   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.172445   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:04.172503   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:04.203562   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.203574   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:04.203634   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:04.232956   13556 logs.go:274] 0 containers: []
	W0601 04:17:04.232968   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:04.232975   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:04.232982   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:04.273645   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:04.273662   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:04.288120   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:04.288136   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:04.347434   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:04.347447   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:04.347457   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:04.359313   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:04.359326   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:06.416200   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056839571s)
	I0601 04:17:08.916531   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:08.980611   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:09.068219   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.068239   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:09.068317   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:09.125467   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.125482   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:09.125546   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:09.205169   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.205200   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:09.205306   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:09.254602   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.254617   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:09.254684   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:09.310365   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.310377   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:09.310450   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:09.373761   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.373775   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:09.373836   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:09.435215   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.435230   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:09.435298   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:09.495171   13556 logs.go:274] 0 containers: []
	W0601 04:17:09.495185   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:09.495195   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:09.495205   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:09.526892   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:09.526909   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:11.606545   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.079598027s)
	I0601 04:17:11.606664   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:11.606672   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:11.651143   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:11.651163   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:11.665570   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:11.665587   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:11.728003   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:14.228701   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:14.480501   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:14.515058   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.515086   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:14.515160   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:14.547076   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.547090   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:14.547154   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:14.586893   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.586906   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:14.586965   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:14.624341   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.624380   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:14.624450   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:14.660387   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.660402   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:14.660459   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:14.697286   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.697301   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:14.697362   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:14.734293   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.734305   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:14.734358   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:14.765756   13556 logs.go:274] 0 containers: []
	W0601 04:17:14.765773   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:14.765782   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:14.765793   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:14.807410   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:14.807433   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:14.822174   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:14.822190   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:14.890758   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:14.890769   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:14.890776   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:14.904605   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:14.904619   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:16.970711   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.066057447s)
	I0601 04:17:19.470999   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:19.980547   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:20.021487   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.021505   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:20.021561   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:20.068534   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.068558   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:20.068650   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:20.123302   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.123317   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:20.123376   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:20.185187   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.185202   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:20.185271   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:20.242881   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.242893   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:20.242964   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:20.312416   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.312428   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:20.312489   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:20.355804   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.355816   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:20.355885   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:20.414825   13556 logs.go:274] 0 containers: []
	W0601 04:17:20.414839   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:20.414848   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:20.414855   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:20.503656   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:20.503667   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:20.503676   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:20.522058   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:20.522074   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:22.588831   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.066724164s)
	I0601 04:17:22.588980   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:22.588987   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:22.635520   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:22.635538   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:25.151118   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:25.480789   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:25.512476   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.512492   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:25.512568   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:25.544793   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.544807   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:25.544866   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:25.576080   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.576099   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:25.576178   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:25.611679   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.611692   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:25.611755   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:25.646253   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.646266   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:25.646332   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:25.677582   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.677595   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:25.677652   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:25.710253   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.710267   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:25.710387   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:25.741238   13556 logs.go:274] 0 containers: []
	W0601 04:17:25.741252   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:25.741261   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:25.741268   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:25.786192   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:25.786206   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:25.799878   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:25.799892   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:25.857664   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:25.857675   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:25.857682   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:25.871651   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:25.871664   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:27.943279   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.071577035s)
	I0601 04:17:30.444289   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:30.481490   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:30.510430   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.510442   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:30.510497   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:30.539779   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.539792   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:30.539849   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:30.571060   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.571073   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:30.571128   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:30.602940   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.602955   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:30.603016   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:30.635687   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.635706   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:30.635782   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:30.667366   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.667378   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:30.667434   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:30.697372   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.697385   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:30.697443   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:30.727457   13556 logs.go:274] 0 containers: []
	W0601 04:17:30.727470   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:30.727477   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:30.727484   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:30.740679   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:30.740693   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:30.799512   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:30.799523   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:30.799530   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:30.812669   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:30.812683   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:32.869752   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057033366s)
	I0601 04:17:32.869863   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:32.869870   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:35.410888   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:35.480943   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:35.511002   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.511014   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:35.511070   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:35.539659   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.539671   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:35.539725   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:35.569216   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.569229   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:35.569283   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:35.596897   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.596911   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:35.596968   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:35.626932   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.626945   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:35.626997   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:35.656052   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.656065   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:35.656121   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:35.686115   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.686128   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:35.686182   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:35.715846   13556 logs.go:274] 0 containers: []
	W0601 04:17:35.715858   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:35.715866   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:35.715872   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:35.757524   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:35.757537   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:35.770342   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:35.770354   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:35.824255   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:35.824268   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:35.824275   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:35.836410   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:35.836422   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:37.893351   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056894302s)
	I0601 04:17:40.394399   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:40.482830   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:40.513804   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.513816   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:40.513870   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:40.543807   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.543820   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:40.543880   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:40.575870   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.575886   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:40.575950   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:40.606360   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.606373   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:40.606428   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:40.635847   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.635864   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:40.635939   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:40.666820   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.666838   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:40.666893   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:40.695385   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.695398   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:40.695455   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:40.723845   13556 logs.go:274] 0 containers: []
	W0601 04:17:40.723857   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:40.723865   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:40.723873   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:40.763912   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:40.763926   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:40.776466   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:40.776479   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:40.829153   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:40.829164   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:40.829171   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:40.841169   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:40.841180   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:42.905157   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063942562s)
	I0601 04:17:45.405808   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:45.480874   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:45.512723   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.512738   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:45.512794   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:45.542686   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.542700   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:45.542760   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:45.572225   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.572248   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:45.572316   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:45.603125   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.603137   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:45.603204   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:45.633676   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.633688   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:45.633743   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:45.663786   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.663798   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:45.663858   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:45.693890   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.693902   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:45.693956   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:45.723867   13556 logs.go:274] 0 containers: []
	W0601 04:17:45.723879   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:45.723887   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:45.723897   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:47.785098   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061166291s)
	I0601 04:17:47.785206   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:47.785213   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:47.825892   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:47.825929   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:47.843271   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:47.843287   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:47.908180   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:47.908199   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:47.908206   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:50.423549   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:50.482196   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:50.512980   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.512992   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:50.513047   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:50.542671   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.542683   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:50.542739   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:50.570751   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.570764   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:50.570835   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:50.598996   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.599010   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:50.599064   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:50.626962   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.626974   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:50.627029   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:50.658387   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.658399   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:50.658457   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:50.687475   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.687488   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:50.687545   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:50.715971   13556 logs.go:274] 0 containers: []
	W0601 04:17:50.715985   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:50.715993   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:50.716000   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:52.769163   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05312911s)
	I0601 04:17:52.769282   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:52.769290   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:52.809524   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:52.809540   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:52.821873   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:52.821920   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:52.874695   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:52.874706   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:52.874714   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:55.389468   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:17:55.481192   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:17:55.509699   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.509711   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:17:55.509768   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:17:55.539278   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.539292   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:17:55.539350   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:17:55.569436   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.569449   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:17:55.569513   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:17:55.600968   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.600979   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:17:55.601034   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:17:55.631819   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.631832   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:17:55.631888   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:17:55.662715   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.662727   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:17:55.662783   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:17:55.691263   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.691276   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:17:55.691336   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:17:55.719947   13556 logs.go:274] 0 containers: []
	W0601 04:17:55.719960   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:17:55.719967   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:17:55.719975   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:17:55.760071   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:17:55.760085   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:17:55.772241   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:17:55.772254   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:17:55.824683   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:17:55.824695   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:17:55.824705   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:17:55.837232   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:17:55.837244   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:17:57.888654   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05137519s)
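The repeated blocks here are minikube's apiserver wait loop: roughly every five seconds it checks for a kube-apiserver process, then asks Docker for each expected control-plane container by name filter; every filter returns zero containers, so it falls back to gathering kubelet, dmesg, "describe nodes", Docker, and container-status logs, and the describe call keeps failing because nothing is listening on localhost:8443. A minimal sketch of running the same probes by hand, using only the commands the loop itself runs (the `minikube ssh -p <profile>` entry point and the placeholder profile name are assumptions, not taken from this log):

	# open a shell inside the node; the profile name is a placeholder (assumption)
	minikube ssh -p <profile>
	# the probes minikube repeats above
	sudo pgrep -xnf kube-apiserver.*minikube.*
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	sudo crictl ps -a || sudo docker ps -a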
	I0601 04:18:00.390551   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:00.481062   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:00.511882   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.511896   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:00.511955   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:00.540493   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.540506   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:00.540560   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:00.569751   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.569764   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:00.569819   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:00.599255   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.599268   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:00.599322   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:00.628551   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.628563   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:00.628620   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:00.658879   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.658897   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:00.658965   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:00.689105   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.689118   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:00.689177   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:00.717927   13556 logs.go:274] 0 containers: []
	W0601 04:18:00.717939   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:00.717946   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:00.717953   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:00.729857   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:00.729869   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:00.781397   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:00.781409   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:00.781415   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:00.793584   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:00.793595   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:02.850639   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057008974s)
	I0601 04:18:02.850749   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:02.850756   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:05.390782   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:05.481498   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:05.513348   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.513361   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:05.513418   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:05.544461   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.544474   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:05.544534   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:05.573627   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.573639   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:05.573695   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:05.602233   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.602246   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:05.602302   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:05.632649   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.632661   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:05.632717   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:05.661461   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.661473   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:05.661530   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:05.690449   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.690461   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:05.690517   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:05.720337   13556 logs.go:274] 0 containers: []
	W0601 04:18:05.720351   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:05.720360   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:05.720367   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:05.759027   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:05.759039   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:05.771230   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:05.771243   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:05.823870   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:05.823882   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:05.823889   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:05.835700   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:05.835711   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:07.895553   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059805753s)
	I0601 04:18:10.396984   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:10.481117   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:10.511848   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.511860   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:10.511919   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:10.542609   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.542622   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:10.542678   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:10.571696   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.571708   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:10.571764   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:10.604101   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.604114   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:10.604173   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:10.640232   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.640244   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:10.640299   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:10.670576   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.670588   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:10.670650   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:10.699085   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.699099   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:10.699171   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:10.728401   13556 logs.go:274] 0 containers: []
	W0601 04:18:10.728413   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:10.728422   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:10.728429   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:10.771594   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:10.771607   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:10.783917   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:10.783929   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:10.836854   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:10.836864   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:10.836871   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:10.848438   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:10.848449   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:12.901368   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052884102s)
	I0601 04:18:15.403723   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:15.481185   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:15.512966   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.512977   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:15.513037   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:15.544508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.544521   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:15.544567   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:15.581483   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.581494   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:15.581555   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:15.613508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.613522   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:15.613578   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:15.645122   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.645148   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:15.645206   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:15.675331   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.675344   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:15.675397   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:15.706115   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.706144   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:15.706233   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:15.738582   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.738596   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:15.738604   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:15.738612   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:15.803326   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:15.803338   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:15.803345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:15.821038   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:15.821061   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:17.883284   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06218452s)
	I0601 04:18:17.883398   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:17.883406   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:17.927628   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:17.927643   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:20.440923   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:20.483332   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:20.514761   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.514773   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:20.514833   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:20.546039   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.546053   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:20.546108   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:20.575400   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.575414   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:20.575469   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:20.606603   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.606617   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:20.606680   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:20.635837   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.635849   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:20.635906   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:20.666144   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.666157   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:20.666211   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:20.694854   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.694866   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:20.694924   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:20.725318   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.725331   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:20.725338   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:20.725345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:20.778767   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:20.778778   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:20.778785   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:20.790876   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:20.790888   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:22.843261   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05233999s)
	I0601 04:18:22.843425   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:22.843432   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:22.886071   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:22.886084   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:25.399324   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:25.481380   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:25.515313   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.515325   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:25.515385   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:25.546864   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.546877   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:25.546942   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:25.582431   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.582445   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:25.582503   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:25.622691   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.622704   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:25.622766   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:25.654669   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.654682   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:25.654738   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:25.685692   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.685706   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:25.685765   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:25.719896   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.719910   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:25.719974   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:25.755042   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.755058   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:25.755066   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:25.755074   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:25.815872   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:25.815883   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:25.815891   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:25.829154   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:25.829166   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:27.888157   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058957128s)
	I0601 04:18:27.888265   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:27.888293   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:27.929491   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:27.929508   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:30.444730   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:30.481478   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:30.511666   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.511679   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:30.511732   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:30.542700   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.542715   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:30.542772   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:30.572035   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.572047   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:30.572104   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:30.603167   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.603179   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:30.603238   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:30.632389   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.632402   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:30.632456   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:30.660425   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.660437   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:30.660494   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:30.692427   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.692440   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:30.692498   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:30.721182   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.721194   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:30.721201   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:30.721209   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:30.763615   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:30.763627   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:30.779090   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:30.779105   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:30.837839   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:30.837850   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:30.837857   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:30.851365   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:30.851379   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:32.907858   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056443603s)
	I0601 04:18:35.408111   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:35.483017   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:35.513087   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.513099   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:35.513153   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:35.541148   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.541161   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:35.541222   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:35.569639   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.569652   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:35.569708   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:35.599189   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.599201   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:35.599254   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:35.628983   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.628995   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:35.629052   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:35.658557   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.658569   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:35.658623   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:35.691031   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.691058   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:35.691174   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:35.721259   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.721271   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:35.721277   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:35.721284   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:35.733301   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:35.733315   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:35.785853   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:35.785866   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:35.785872   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:35.799604   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:35.799616   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:37.856133   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056481616s)
	I0601 04:18:37.856244   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:37.856250   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:40.397963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:40.408789   13556 kubeadm.go:630] restartCluster took 4m7.458583962s
	W0601 04:18:40.408865   13556 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0601 04:18:40.408881   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:18:40.824000   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:18:40.833055   13556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:40.846500   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:18:40.846568   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:18:40.859653   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:18:40.859688   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:18:41.605164   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:18:42.649022   13556 out.go:204]   - Booting up control plane ...
	W0601 04:20:37.567191   13556 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 04:20:37.567223   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:20:37.985183   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:20:37.995063   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:20:37.995115   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:20:38.003134   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:20:38.003167   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:20:38.714980   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:20:39.157245   13556 out.go:204]   - Booting up control plane ...
	I0601 04:22:34.113538   13556 kubeadm.go:397] StartCluster complete in 8m1.165906933s
	I0601 04:22:34.113614   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:22:34.143687   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.143700   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:22:34.143755   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:22:34.173703   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.173716   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:22:34.173771   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:22:34.204244   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.204257   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:22:34.204312   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:22:34.235759   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.235775   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:22:34.235836   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:22:34.265295   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.265308   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:22:34.265362   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:22:34.294194   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.294207   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:22:34.294263   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:22:34.323578   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.323590   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:22:34.323645   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:22:34.353103   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.353115   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:22:34.353122   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:22:34.353128   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:22:34.396193   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:22:34.396212   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:22:34.408612   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:22:34.408626   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:22:34.471074   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:22:34.471086   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:22:34.471093   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:22:34.483079   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:22:34.483090   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:22:36.538288   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055125762s)
	W0601 04:22:36.538414   13556 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 04:22:36.538429   13556 out.go:239] * 
	* 
	W0601 04:22:36.538563   13556 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.538581   13556 out.go:239] * 
	* 
	W0601 04:22:36.539131   13556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 04:22:36.603750   13556 out.go:177] 
	W0601 04:22:36.646990   13556 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.647054   13556 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 04:22:36.647091   13556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 04:22:36.667708   13556 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
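The failed run above ends with minikube's own "Suggestion:" line, which points at the kubelet cgroup driver. Purely as a hedged sketch (this is not something the test harness runs, and this report does not establish it as the fix), a manual retry of that suggestion against the same profile could look like:

	out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 \
	    --memory=2200 --driver=docker --kubernetes-version=v1.16.0 \
	    --extra-config=kubelet.cgroup-driver=systemd    # flag taken from the Suggestion line in the log above

The profile name, driver, memory, and Kubernetes version are copied from the failing command; whether the cgroup-driver override actually resolves the kubelet start failure is not confirmed anywhere in this report.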
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601040844-2342
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601040844-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef",
	        "Created": "2022-06-01T11:08:51.714948054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210556,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:14:29.397998414Z",
	            "FinishedAt": "2022-06-01T11:14:26.589423316Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hosts",
	        "LogPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef-json.log",
	        "Name": "/old-k8s-version-20220601040844-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601040844-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601040844-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601040844-2342",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601040844-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601040844-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67742c0ebbdd1f76c16da912020c2ef1bdaa88cf6af0da25d66eaecd83c8f4d5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52365"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52366"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52367"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52368"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52369"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67742c0ebbdd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601040844-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91a44163d235",
	                        "old-k8s-version-20220601040844-2342"
	                    ],
	                    "NetworkID": "19418e1daf902e10e91ecb0632ae46e6cbb8b43c0deeca829a591ae95b7f1e4b",
	                    "EndpointID": "f03c2fa8111d36ee41f3d8b53613ddd37aee00df9d89313a9d833d5735db5784",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (445.779745ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220601040844-2342 logs -n 25
E0601 04:22:40.693562    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220601040844-2342 logs -n 25: (3.556268336s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	| start   | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220601040914-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | disable-driver-mounts-20220601040914-2342         |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:15 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:17 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:18:14
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:18:14.774878   14036 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:18:14.775106   14036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:18:14.775111   14036 out.go:309] Setting ErrFile to fd 2...
	I0601 04:18:14.775115   14036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:18:14.775218   14036 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:18:14.775473   14036 out.go:303] Setting JSON to false
	I0601 04:18:14.790201   14036 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4664,"bootTime":1654077630,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:18:14.790325   14036 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:18:14.812835   14036 out.go:177] * [no-preload-20220601041659-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:18:14.855264   14036 notify.go:193] Checking for updates...
	I0601 04:18:14.877395   14036 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:18:14.899376   14036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:18:14.921165   14036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:18:14.942588   14036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:18:14.964495   14036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:18:14.986858   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:18:14.987481   14036 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:18:15.060132   14036 docker.go:137] docker version: linux-20.10.14
	I0601 04:18:15.060314   14036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:18:15.195804   14036 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:18:15.147757293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:18:15.239675   14036 out.go:177] * Using the docker driver based on existing profile
	I0601 04:18:15.261306   14036 start.go:284] selected driver: docker
	I0601 04:18:15.261319   14036 start.go:806] validating driver "docker" against &{Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:15.261387   14036 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:18:15.263519   14036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:18:15.389107   14036 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:18:15.340671415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:18:15.389294   14036 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:18:15.389312   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:15.389320   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:15.389327   14036 start_flags.go:306] config:
	{Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:15.411303   14036 out.go:177] * Starting control plane node no-preload-20220601041659-2342 in cluster no-preload-20220601041659-2342
	I0601 04:18:15.432174   14036 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:18:15.453963   14036 out.go:177] * Pulling base image ...
	I0601 04:18:15.496035   14036 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:18:15.496046   14036 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:18:15.496172   14036 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/config.json ...
	I0601 04:18:15.496274   14036 cache.go:107] acquiring lock: {Name:mk3e9a6bf873842d2e5ca428e419405f67698986 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.496248   14036 cache.go:107] acquiring lock: {Name:mk6cdcb3277425415932624173a7b7ca3460ec43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497016   14036 cache.go:107] acquiring lock: {Name:mk5aea169468c70908c7500bcfea18f2c75c6bec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497323   14036 cache.go:107] acquiring lock: {Name:mk0ce8763eede5207a594beee88851a0e339bc7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497362   14036 cache.go:107] acquiring lock: {Name:mk735d5a3617189a069af22bcee4c9a1653c60c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497430   14036 cache.go:107] acquiring lock: {Name:mkbce65c6aa4c06171eeb95b8350c15ff2252191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497532   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0601 04:18:15.497461   14036 cache.go:107] acquiring lock: {Name:mkc7860c5e3d5dd07d6a0cd1126cb14b20ddb5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497556   14036 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 1.295562ms
	I0601 04:18:15.497574   14036 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0601 04:18:15.497985   14036 cache.go:107] acquiring lock: {Name:mk2917ee5d109fb25f09b3f463d8b7c0891736eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497986   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 exists
	I0601 04:18:15.498016   14036 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6" took 1.771839ms
	I0601 04:18:15.498038   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0601 04:18:15.498038   14036 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 succeeded
	I0601 04:18:15.498057   14036 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.82584ms
	I0601 04:18:15.498071   14036 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0601 04:18:15.498155   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 exists
	I0601 04:18:15.498156   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 exists
	I0601 04:18:15.498158   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0601 04:18:15.498169   14036 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6" took 1.113538ms
	I0601 04:18:15.498179   14036 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 succeeded
	I0601 04:18:15.498177   14036 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6" took 1.19488ms
	I0601 04:18:15.498188   14036 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 succeeded
	I0601 04:18:15.498180   14036 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 1.089655ms
	I0601 04:18:15.498208   14036 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0601 04:18:15.498227   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 exists
	I0601 04:18:15.498223   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0601 04:18:15.498233   14036 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6" took 896.28µs
	I0601 04:18:15.498240   14036 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 1.200632ms
	I0601 04:18:15.498243   14036 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 succeeded
	I0601 04:18:15.498248   14036 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0601 04:18:15.498261   14036 cache.go:87] Successfully saved all images to host disk.
	I0601 04:18:15.562486   14036 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:18:15.562503   14036 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:18:15.562514   14036 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:18:15.562561   14036 start.go:352] acquiring machines lock for no-preload-20220601041659-2342: {Name:mk58caff34cdda9e203618eaf8e1336a225589ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.562634   14036 start.go:356] acquired machines lock for "no-preload-20220601041659-2342" in 62.594µs
	I0601 04:18:15.562660   14036 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:18:15.562670   14036 fix.go:55] fixHost starting: 
	I0601 04:18:15.562891   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:18:15.632241   14036 fix.go:103] recreateIfNeeded on no-preload-20220601041659-2342: state=Stopped err=<nil>
	W0601 04:18:15.632274   14036 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:18:15.654231   14036 out.go:177] * Restarting existing docker container for "no-preload-20220601041659-2342" ...
	I0601 04:18:15.403723   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:15.481185   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:15.512966   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.512977   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:15.513037   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:15.544508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.544521   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:15.544567   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:15.581483   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.581494   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:15.581555   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:15.613508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.613522   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:15.613578   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:15.645122   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.645148   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:15.645206   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:15.675331   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.675344   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:15.675397   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:15.706115   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.706144   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:15.706233   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:15.738582   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.738596   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:15.738604   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:15.738612   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:15.803326   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:15.803338   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:15.803345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:15.821038   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:15.821061   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:17.883284   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06218452s)
	I0601 04:18:17.883398   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:17.883406   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:17.927628   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:17.927643   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:15.696056   14036 cli_runner.go:164] Run: docker start no-preload-20220601041659-2342
	I0601 04:18:16.064616   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:18:16.139380   14036 kic.go:416] container "no-preload-20220601041659-2342" state is running.
	I0601 04:18:16.140133   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:16.223932   14036 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/config.json ...
	I0601 04:18:16.224334   14036 machine.go:88] provisioning docker machine ...
	I0601 04:18:16.224356   14036 ubuntu.go:169] provisioning hostname "no-preload-20220601041659-2342"
	I0601 04:18:16.224451   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.304277   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:16.304462   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:16.304479   14036 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220601041659-2342 && echo "no-preload-20220601041659-2342" | sudo tee /etc/hostname
	I0601 04:18:16.433244   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220601041659-2342
	
	I0601 04:18:16.433319   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.508502   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:16.508694   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:16.508709   14036 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220601041659-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220601041659-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220601041659-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:18:16.629821   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:18:16.629881   14036 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:18:16.629920   14036 ubuntu.go:177] setting up certificates
	I0601 04:18:16.629931   14036 provision.go:83] configureAuth start
	I0601 04:18:16.630007   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:16.783851   14036 provision.go:138] copyHostCerts
	I0601 04:18:16.783934   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:18:16.783942   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:18:16.784048   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:18:16.784282   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:18:16.784290   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:18:16.784348   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:18:16.784513   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:18:16.784521   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:18:16.784583   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:18:16.784734   14036 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220601041659-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220601041659-2342]
	I0601 04:18:16.853835   14036 provision.go:172] copyRemoteCerts
	I0601 04:18:16.853899   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:18:16.853944   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.930312   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:17.016327   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 04:18:17.035298   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:18:17.053766   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:18:17.073791   14036 provision.go:86] duration metric: configureAuth took 443.84114ms
	I0601 04:18:17.073803   14036 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:18:17.073938   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:18:17.073997   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.145242   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.145411   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.145424   14036 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:18:17.264247   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:18:17.264260   14036 ubuntu.go:71] root file system type: overlay
	I0601 04:18:17.264374   14036 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:18:17.264440   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.337947   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.338110   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.338170   14036 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:18:17.464122   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:18:17.464206   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.535287   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.535439   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.535452   14036 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:18:17.656716   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:18:17.656732   14036 machine.go:91] provisioned docker machine in 1.432374179s
	I0601 04:18:17.656739   14036 start.go:306] post-start starting for "no-preload-20220601041659-2342" (driver="docker")
	I0601 04:18:17.656743   14036 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:18:17.656811   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:18:17.656865   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.727465   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:17.821284   14036 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:18:17.825772   14036 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:18:17.825789   14036 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:18:17.825804   14036 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:18:17.825812   14036 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:18:17.825821   14036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:18:17.825928   14036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:18:17.826061   14036 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:18:17.826225   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:18:17.833508   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:18:17.850697   14036 start.go:309] post-start completed in 193.939485ms
	I0601 04:18:17.850781   14036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:18:17.850824   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.926433   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.009814   14036 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:18:18.014092   14036 fix.go:57] fixHost completed within 2.451394081s
	I0601 04:18:18.014102   14036 start.go:81] releasing machines lock for "no-preload-20220601041659-2342", held for 2.451434151s
	I0601 04:18:18.014172   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:18.086484   14036 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:18:18.086486   14036 ssh_runner.go:195] Run: systemctl --version
	I0601 04:18:18.086599   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.086599   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.167367   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.170314   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.252773   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:18:18.385785   14036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:18:18.396076   14036 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:18:18.396130   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:18:18.405470   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:18:18.418490   14036 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:18:18.486961   14036 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:18:18.553762   14036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:18:18.563812   14036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:18:18.628982   14036 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:18:18.638220   14036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:18:18.674484   14036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:18:18.753443   14036 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:18:18.753615   14036 cli_runner.go:164] Run: docker exec -t no-preload-20220601041659-2342 dig +short host.docker.internal
	I0601 04:18:18.881761   14036 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:18:18.881851   14036 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:18:18.886028   14036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
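Both /etc/hosts updates in this run (host.minikube.internal here, and control-plane.minikube.internal later at 04:18:19.131) follow the same idempotent pattern: strip any stale line for the hostname, then append the current IP mapping. A minimal Go sketch of that pattern, writing to a temporary file so it is safe to run as-is (the real step uses the grep/echo/cp pipeline shown in the command above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any stale line for name and appends "ip<TAB>name".
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[1] == name {
    			continue // stale mapping for this name, drop it
    		}
    		if strings.TrimSpace(line) != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Writable copy so the sketch does not touch the real /etc/hosts.
    	tmp := "/tmp/hosts.sketch"
    	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
    	fmt.Println(ensureHostsEntry(tmp, "192.168.65.2", "host.minikube.internal"))
    }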
	I0601 04:18:18.895724   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.966314   14036 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:18:18.966372   14036 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:18:18.998564   14036 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:18:18.998580   14036 cache_images.go:84] Images are preloaded, skipping loading
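The preload check above lists what the docker daemon already has (docker images --format {{.Repository}}:{{.Tag}}) and compares it against the images required for v1.23.6; since every required image is present, loading is skipped. A minimal Go sketch of such a check (hypothetical helper name, not minikube's cache_images.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagesPreloaded reports whether every required image is already in the daemon.
    func imagesPreloaded(required []string) (bool, error) {
    	// Same command the log shows above.
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[strings.TrimSpace(line)] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			return false, nil // at least one image still needs to be loaded
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := imagesPreloaded([]string{
    		"k8s.gcr.io/kube-apiserver:v1.23.6",
    		"k8s.gcr.io/pause:3.6",
    	})
    	fmt.Println(ok, err)
    }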
	I0601 04:18:18.998654   14036 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:18:19.075561   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:19.075576   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:19.075610   14036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:18:19.075634   14036 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220601041659-2342 NodeName:no-preload-20220601041659-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:18:19.075750   14036 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "no-preload-20220601041659-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:18:19.075828   14036 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20220601041659-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:18:19.075885   14036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:18:19.083584   14036 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:18:19.083642   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:18:19.090501   14036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0601 04:18:19.102643   14036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:18:19.114867   14036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
	I0601 04:18:19.127704   14036 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:18:19.131229   14036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:18:19.140635   14036 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342 for IP: 192.168.49.2
	I0601 04:18:19.140743   14036 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:18:19.140794   14036 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:18:19.140880   14036 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.key
	I0601 04:18:19.140951   14036 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.key.dd3b5fb2
	I0601 04:18:19.141000   14036 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.key
	I0601 04:18:19.141188   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:18:19.141229   14036 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:18:19.141241   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:18:19.141271   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:18:19.141304   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:18:19.141334   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:18:19.141394   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:18:19.141961   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:18:19.159226   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:18:19.175596   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:18:19.192259   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 04:18:19.210061   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:18:19.226574   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:18:19.243363   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:18:19.260116   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:18:19.277176   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:18:19.293972   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:18:19.310746   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:18:19.327971   14036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:18:19.340461   14036 ssh_runner.go:195] Run: openssl version
	I0601 04:18:19.345544   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:18:19.353245   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.357005   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.357044   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.361993   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:18:19.369033   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:18:19.376927   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.380775   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.380813   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.385890   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:18:19.392971   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:18:19.400951   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.405006   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.405058   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.410775   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
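The three certificate installs above share one pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0 so the system trust store can resolve it. A minimal Go sketch of the hash-and-link step (hypothetical helper, shelling out to the same openssl invocation the log shows):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert hashes a CA certificate and exposes it under certsDir as "<hash>.0".
    func linkCACert(certPath, certsDir string) error {
    	// Same command as the log: openssl x509 -hash -noout -in <cert>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
    	return os.Symlink(certPath, link)
    }

    func main() {
    	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }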
	I0601 04:18:19.418340   14036 kubeadm.go:395] StartCluster: {Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:19.418443   14036 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:18:19.448862   14036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:18:19.456368   14036 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:18:19.456381   14036 kubeadm.go:626] restartCluster start
	I0601 04:18:19.456423   14036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:18:19.463881   14036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:19.463941   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:19.568912   14036 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220601041659-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:18:19.569088   14036 kubeconfig.go:127] "no-preload-20220601041659-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:18:19.569472   14036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:18:19.570824   14036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:18:19.578849   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.578930   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:19.587781   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.440923   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:20.483332   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:20.514761   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.514773   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:20.514833   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:20.546039   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.546053   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:20.546108   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:20.575400   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.575414   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:20.575469   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:20.606603   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.606617   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:20.606680   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:20.635837   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.635849   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:20.635906   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:20.666144   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.666157   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:20.666211   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:20.694854   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.694866   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:20.694924   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:20.725318   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.725331   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:20.725338   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:20.725345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:20.778767   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:20.778778   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:20.778785   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:20.790876   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:20.790888   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:22.843261   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05233999s)
	I0601 04:18:22.843425   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:22.843432   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:22.886071   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:22.886084   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:19.789971   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.799683   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:19.810034   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:19.990006   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.990226   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.000773   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.190022   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.190234   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.201267   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.387932   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.388133   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.399100   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.588039   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.588104   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.597935   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.788903   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.788959   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.798457   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.990036   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.990239   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.000901   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.189970   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.190109   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.202580   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.390048   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.390138   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.402774   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.590024   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.590200   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.601412   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.787917   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.787977   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.797125   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.990025   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.990212   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.001077   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.190009   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.190214   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.201491   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.390189   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.390292   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.401348   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.588348   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.588437   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.597421   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.597432   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.597484   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.605651   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.605661   14036 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:18:22.605669   14036 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:18:22.605721   14036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:18:22.637821   14036 docker.go:442] Stopping containers: [f3ab122c826b 1734d7965330 83819780dd93 b9768112bb8b 84611d08ab8e 48c8256f94d6 651e4e6fd977 4702c401989d f48f3e09df46 e1fc171fe8aa cd2a23f7c38c 85e4aa0cd1f6 030ece384801 b93e15c9f0f8 03abb63ba5d1 f241878ca7d9]
	I0601 04:18:22.637899   14036 ssh_runner.go:195] Run: docker stop f3ab122c826b 1734d7965330 83819780dd93 b9768112bb8b 84611d08ab8e 48c8256f94d6 651e4e6fd977 4702c401989d f48f3e09df46 e1fc171fe8aa cd2a23f7c38c 85e4aa0cd1f6 030ece384801 b93e15c9f0f8 03abb63ba5d1 f241878ca7d9
	I0601 04:18:22.668131   14036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:18:22.678704   14036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:18:22.686703   14036 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:17 /etc/kubernetes/scheduler.conf
	
	I0601 04:18:22.686754   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:18:22.694474   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:18:22.701738   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:18:22.708977   14036 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.709021   14036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:18:22.716105   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:18:22.723106   14036 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.723152   14036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:18:22.729915   14036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:22.737243   14036 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
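The sequence above checks each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and deletes any file that lacks it, so the subsequent "kubeadm init phase kubeconfig" can regenerate it. A minimal Go sketch of that check (hypothetical helper name, not minikube's kubeadm.go):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pruneStaleKubeconfigs removes any config that does not reference the control-plane endpoint.
    func pruneStaleKubeconfigs(endpoint string, files []string) error {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			return err
    		}
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q does not reference %s - removing so kubeadm regenerates it\n", f, endpoint)
    			if err := os.Remove(f); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	fmt.Println(pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", files))
    }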
	I0601 04:18:22.737252   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:22.785192   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.493831   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.616417   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.664597   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.714856   14036 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:18:23.714918   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.224606   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.724651   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.741543   14036 api_server.go:71] duration metric: took 1.026678054s to wait for apiserver process to appear ...
	I0601 04:18:24.741573   14036 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:18:24.741609   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:24.743165   14036 api_server.go:256] stopped: https://127.0.0.1:53162/healthz: Get "https://127.0.0.1:53162/healthz": EOF
	I0601 04:18:25.399324   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:25.481380   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:25.515313   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.515325   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:25.515385   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:25.546864   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.546877   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:25.546942   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:25.582431   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.582445   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:25.582503   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:25.622691   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.622704   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:25.622766   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:25.654669   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.654682   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:25.654738   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:25.685692   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.685706   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:25.685765   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:25.719896   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.719910   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:25.719974   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:25.755042   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.755058   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:25.755066   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:25.755074   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:25.815872   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:25.815883   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:25.815891   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:25.829154   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:25.829166   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:27.888157   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058957128s)
	I0601 04:18:27.888265   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:27.888293   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:27.929491   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:27.929508   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:25.243670   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:27.652029   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:18:27.652045   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:18:27.743312   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:27.749868   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:27.749888   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:28.243386   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:28.250986   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:28.250999   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:28.743315   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:28.749565   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:28.749583   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:29.243324   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:29.249665   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 200:
	ok
	I0601 04:18:29.256790   14036 api_server.go:140] control plane version: v1.23.6
	I0601 04:18:29.256806   14036 api_server.go:130] duration metric: took 4.515177636s to wait for apiserver health ...
	I0601 04:18:29.256812   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:29.256817   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
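
The loop above (api_server.go:240/266) repeatedly probes the apiserver's /healthz endpoint, logging the 500 responses with their per-hook breakdown until the endpoint finally answers 200. A minimal Go sketch of that polling pattern, assuming the URL, the 500ms interval and the overall timeout are illustrative placeholders and skipping verification of the cluster's self-signed certificate for brevity:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz probes url until it returns HTTP 200 or the timeout elapses.
    func pollHealthz(url string, interval, timeout time.Duration) error {
    	// The apiserver serves /healthz over HTTPS with a cluster-local cert,
    	// so certificate verification is skipped in this sketch.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: ok
    			}
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("apiserver healthz: timed out after %s", timeout)
    }

    func main() {
    	// Port, interval and timeout are example values, not minikube's defaults.
    	if err := pollHealthz("https://127.0.0.1:53162/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
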
	I0601 04:18:29.256824   14036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:18:29.265857   14036 system_pods.go:59] 8 kube-system pods found
	I0601 04:18:29.265875   14036 system_pods.go:61] "coredns-64897985d-89vc5" [95167d56-5dd4-4982-a6ca-86bb2e4620e3] Running
	I0601 04:18:29.265879   14036 system_pods.go:61] "etcd-no-preload-20220601041659-2342" [41190448-255a-49e9-b1e9-8ea601ad0843] Running
	I0601 04:18:29.265884   14036 system_pods.go:61] "kube-apiserver-no-preload-20220601041659-2342" [68c306bb-05ab-46ec-a523-865fe75e873a] Running
	I0601 04:18:29.265893   14036 system_pods.go:61] "kube-controller-manager-no-preload-20220601041659-2342" [e54984b5-ad07-42c7-8adc-e3d945a55efe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:18:29.265898   14036 system_pods.go:61] "kube-proxy-fgsgh" [bdfa1c31-6750-4343-b15b-08de66100496] Running
	I0601 04:18:29.265903   14036 system_pods.go:61] "kube-scheduler-no-preload-20220601041659-2342" [5e7b361b-cc2a-420b-83d2-0f0710b6dbd4] Running
	I0601 04:18:29.265908   14036 system_pods.go:61] "metrics-server-b955d9d8-64p54" [75ee83a8-d23f-44d3-ad4a-370743a2a88d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:18:29.265915   14036 system_pods.go:61] "storage-provisioner" [401f203f-92b1-4ae2-a59c-19909e579b9a] Running
	I0601 04:18:29.265919   14036 system_pods.go:74] duration metric: took 9.090386ms to wait for pod list to return data ...
	I0601 04:18:29.265926   14036 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:18:29.268895   14036 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:18:29.268910   14036 node_conditions.go:123] node cpu capacity is 6
	I0601 04:18:29.268921   14036 node_conditions.go:105] duration metric: took 2.991293ms to run NodePressure ...
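
The NodePressure verification above reads the node's reported capacity (61255492Ki of ephemeral storage, 6 CPUs). A rough client-go sketch of reading those capacity fields; the kubeconfig path is only a stand-in and the k8s.io client modules are assumed to be available:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; use whatever kubeconfig points at the cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		// Corresponds to the "node cpu capacity" / "ephemeral capacity" lines in the log.
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
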
	I0601 04:18:29.268932   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:29.533767   14036 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 04:18:29.539069   14036 kubeadm.go:777] kubelet initialised
	I0601 04:18:29.539090   14036 kubeadm.go:778] duration metric: took 5.30228ms waiting for restarted kubelet to initialise ...
	I0601 04:18:29.539104   14036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:18:29.544448   14036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-89vc5" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.550983   14036 pod_ready.go:92] pod "coredns-64897985d-89vc5" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.550993   14036 pod_ready.go:81] duration metric: took 6.531028ms waiting for pod "coredns-64897985d-89vc5" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.550999   14036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.596543   14036 pod_ready.go:92] pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.596558   14036 pod_ready.go:81] duration metric: took 45.552599ms waiting for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.596566   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.603596   14036 pod_ready.go:92] pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.603609   14036 pod_ready.go:81] duration metric: took 7.03783ms waiting for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.603621   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
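
The pod_ready.go waits above poll each system-critical pod until its Ready condition reports True, or the 4m0s budget runs out. A simplified client-go sketch of that readiness check, reusing the same client setup as the node-capacity sketch; the pod name is the example from the log and the timing values are stand-ins:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-64897985d-89vc5", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
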
	I0601 04:18:30.444730   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:30.481478   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:30.511666   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.511679   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:30.511732   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:30.542700   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.542715   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:30.542772   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:30.572035   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.572047   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:30.572104   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:30.603167   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.603179   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:30.603238   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:30.632389   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.632402   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:30.632456   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:30.660425   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.660437   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:30.660494   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:30.692427   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.692440   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:30.692498   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:30.721182   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.721194   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:30.721201   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:30.721209   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:30.763615   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:30.763627   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:30.779090   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:30.779105   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:30.837839   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:30.837850   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:30.837857   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:30.851365   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:30.851379   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:32.907858   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056443603s)
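
Each "Gathering logs" pass above first lists control-plane containers with docker ps name filters and records how many IDs came back (logs.go:274), warning when none match. A small Go sketch of that container lookup using the same docker command the log shows; the filter name is just the example from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of containers whose name matches the filter,
    // mirroring the `docker ps -a --filter=name=... --format={{.ID}}` calls in the log.
    func listContainers(nameFilter string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name="+nameFilter, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listContainers("k8s_kube-apiserver") // example filter from the log
    	if err != nil {
    		fmt.Println("docker ps failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    	if len(ids) == 0 {
    		fmt.Println(`No container was found matching "kube-apiserver"`)
    	}
    }
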
	I0601 04:18:31.668923   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:34.169366   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:35.408111   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:35.483017   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:35.513087   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.513099   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:35.513153   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:35.541148   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.541161   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:35.541222   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:35.569639   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.569652   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:35.569708   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:35.599189   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.599201   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:35.599254   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:35.628983   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.628995   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:35.629052   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:35.658557   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.658569   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:35.658623   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:35.691031   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.691058   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:35.691174   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:35.721259   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.721271   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:35.721277   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:35.721284   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:35.733301   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:35.733315   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:35.785853   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:35.785866   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:35.785872   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:35.799604   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:35.799616   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:37.856133   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056481616s)
	I0601 04:18:37.856244   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:37.856250   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:36.666864   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:38.669699   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:40.397963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:40.408789   13556 kubeadm.go:630] restartCluster took 4m7.458583962s
	W0601 04:18:40.408865   13556 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0601 04:18:40.408881   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:18:40.824000   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:18:40.833055   13556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:40.846500   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:18:40.846568   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:18:40.859653   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
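
The stale-config check above only tests whether the four kubeconfig files under /etc/kubernetes exist; since none do, cleanup is skipped and kubeadm init proceeds from scratch. A hedged Go equivalent of that existence check (the file list is taken directly from the `sudo ls -la ...` command in the log):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Same four files the `sudo ls -la ...` check inspects in the log.
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	missing := 0
    	for _, f := range files {
    		if _, err := os.Stat(f); err != nil {
    			missing++
    			fmt.Printf("cannot access %s: %v\n", f, err)
    		}
    	}
    	if missing > 0 {
    		fmt.Println("config check failed, skipping stale config cleanup")
    	}
    }
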
	I0601 04:18:40.859688   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:18:41.605164   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:18:42.649022   13556 out.go:204]   - Booting up control plane ...
	I0601 04:18:40.667004   14036 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.667016   14036 pod_ready.go:81] duration metric: took 11.063266842s waiting for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.667023   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fgsgh" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.670891   14036 pod_ready.go:92] pod "kube-proxy-fgsgh" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.670899   14036 pod_ready.go:81] duration metric: took 3.871243ms waiting for pod "kube-proxy-fgsgh" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.670904   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.675132   14036 pod_ready.go:92] pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.675141   14036 pod_ready.go:81] duration metric: took 4.221246ms waiting for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.675147   14036 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:42.684353   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:44.685528   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:46.687697   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:48.688040   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:51.186606   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:53.187481   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:55.188655   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:57.687295   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:00.185897   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:02.686911   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:04.688219   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:07.185914   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:09.188825   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:11.688002   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:14.187662   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:16.188114   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:18.188168   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:20.188303   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:22.688347   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:25.186136   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:27.188737   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:29.685888   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:31.687374   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:33.688010   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:35.688223   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:38.186051   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:40.685861   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:43.184495   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:45.184738   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:47.188619   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:49.688777   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:52.187737   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:54.188673   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:56.685934   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:58.688410   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:01.185111   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:03.185648   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:05.186993   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:07.188774   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:09.189538   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:11.687713   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:13.688917   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:16.189541   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:18.689213   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:21.186825   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:23.187139   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:25.187762   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:27.687801   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:29.689029   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:32.186695   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:34.188405   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	W0601 04:20:37.567191   13556 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
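
The kubelet-check failures repeated above come from kubeadm probing http://localhost:10248/healthz and getting connection refused because the kubelet never started. A minimal Go probe of that same endpoint, equivalent to the 'curl -sSL http://localhost:10248/healthz' call kubeadm describes (10248 is the kubelet's default healthz port):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	// The kubelet's healthz endpoint listens on localhost:10248 by default.
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		// With the kubelet down this is the "connection refused" seen in the log.
    		fmt.Println("kubelet healthz failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
    }
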
	
	I0601 04:20:37.567223   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:20:37.985183   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:20:37.995063   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:20:37.995115   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:20:38.003134   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:20:38.003167   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:20:38.714980   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:20:36.688566   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:39.188270   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:39.157245   13556 out.go:204]   - Booting up control plane ...
	I0601 04:20:41.688898   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:44.185451   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:46.186959   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:48.685368   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:50.687302   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:53.186843   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:55.189047   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:57.189228   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:59.689548   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:02.185896   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:04.687858   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:07.189539   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:09.687226   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:11.689231   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:14.186006   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:16.188122   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:18.688041   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:20.695527   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:23.199009   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:25.203674   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:27.704456   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:30.208738   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:32.711751   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:34.714219   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:37.216949   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:39.714895   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:41.720032   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:44.217871   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:46.221799   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:48.720839   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:50.722793   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:53.221273   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:55.223691   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:57.723160   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:59.724889   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:02.222785   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:04.225050   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:06.723223   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:08.723519   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:10.726350   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:13.225782   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:15.229179   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:17.726361   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:20.226547   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:22.228014   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:24.725636   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:26.726651   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:29.225011   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:31.725015   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:33.726567   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:34.113538   13556 kubeadm.go:397] StartCluster complete in 8m1.165906933s
	I0601 04:22:34.113614   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:22:34.143687   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.143700   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:22:34.143755   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:22:34.173703   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.173716   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:22:34.173771   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:22:34.204244   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.204257   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:22:34.204312   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:22:34.235759   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.235775   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:22:34.235836   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:22:34.265295   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.265308   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:22:34.265362   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:22:34.294194   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.294207   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:22:34.294263   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:22:34.323578   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.323590   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:22:34.323645   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:22:34.353103   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.353115   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:22:34.353122   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:22:34.353128   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:22:34.396193   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:22:34.396212   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:22:34.408612   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:22:34.408626   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:22:34.471074   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:22:34.471086   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:22:34.471093   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:22:34.483079   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:22:34.483090   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:22:36.538288   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055125762s)
	W0601 04:22:36.538414   13556 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 04:22:36.538429   13556 out.go:239] * 
	W0601 04:22:36.538563   13556 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.538581   13556 out.go:239] * 
	W0601 04:22:36.539131   13556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 04:22:36.603750   13556 out.go:177] 
	W0601 04:22:36.646990   13556 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.647054   13556 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 04:22:36.647091   13556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 04:22:36.667708   13556 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:14:29 UTC, end at Wed 2022-06-01 11:22:38 UTC. --
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.661521825Z" level=info msg="Starting up"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663342504Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663395200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663411000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663419036Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664701040Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664730081Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664742618Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664754909Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.669344312Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.673789964Z" level=info msg="Loading containers: start."
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.759102419Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.791878604Z" level=info msg="Loading containers: done."
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.800298543Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.800366770Z" level=info msg="Daemon has completed initialization"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 systemd[1]: Started Docker Application Container Engine.
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.826706081Z" level=info msg="API listen on [::]:2376"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.829430983Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-06-01T11:22:40Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  11:22:40 up  1:03,  0 users,  load average: 0.75, 0.85, 0.94
	Linux old-k8s-version-20220601040844-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:14:29 UTC, end at Wed 2022-06-01 11:22:40 UTC. --
	Jun 01 11:22:38 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 kubelet[14364]: I0601 11:22:39.733017   14364 server.go:410] Version: v1.16.0
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 kubelet[14364]: I0601 11:22:39.733443   14364 plugins.go:100] No cloud provider specified.
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 kubelet[14364]: I0601 11:22:39.733499   14364 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 kubelet[14364]: I0601 11:22:39.735410   14364 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 kubelet[14364]: W0601 11:22:39.736106   14364 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 kubelet[14364]: W0601 11:22:39.736262   14364 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 kubelet[14364]: F0601 11:22:39.736338   14364 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 11:22:39 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 kubelet[14388]: I0601 11:22:40.478447   14388 server.go:410] Version: v1.16.0
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 kubelet[14388]: I0601 11:22:40.478912   14388 plugins.go:100] No cloud provider specified.
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 kubelet[14388]: I0601 11:22:40.478991   14388 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 kubelet[14388]: I0601 11:22:40.480896   14388 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 kubelet[14388]: W0601 11:22:40.481575   14388 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 kubelet[14388]: W0601 11:22:40.481670   14388 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 kubelet[14388]: F0601 11:22:40.481735   14388 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 11:22:40 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 04:22:40.531038   14161 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (467.160981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220601040844-2342" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (493.37s)
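The failure above lines up with minikube's own hint in the log ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") and with the kubelet's fatal "failed to run Kubelet: mountpoint for cpu not found". A minimal manual triage sketch along those lines, assuming the old-k8s-version profile still exists on the Jenkins host; the flag and the inner commands are the ones quoted in the log above, not additional test steps:

	# Retry the second start with the suggested kubelet cgroup-driver override.
	out/minikube-darwin-amd64 start -p old-k8s-version-20220601040844-2342 \
	  --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet still crash-loops, check it from inside the node,
	# following the kubeadm advice printed above.
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220601040844-2342 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220601040844-2342 "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220601040844-2342 "docker ps -a | grep kube | grep -v pause"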

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (43.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220601040915-2342 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
E0601 04:16:14.253046    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342: exit status 2 (16.098808267s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
E0601 04:16:32.136382    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 04:16:39.608923    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:16:44.432334    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342: exit status 2 (16.106241996s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220601040915-2342 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
E0601 04:16:46.990367    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601040915-2342
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601040915-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4",
	        "Created": "2022-06-01T11:09:22.674741756Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:10:21.816317184Z",
	            "FinishedAt": "2022-06-01T11:10:19.89091995Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/hosts",
	        "LogPath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4-json.log",
	        "Name": "/embed-certs-20220601040915-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601040915-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601040915-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601040915-2342",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601040915-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601040915-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601040915-2342",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601040915-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96e2f1ed8e8102dd45ceac0ad7b3522b9f6f6ece308d57228a1c6a08b374db18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52128"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52129"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/96e2f1ed8e81",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601040915-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6ae714b59470",
	                        "embed-certs-20220601040915-2342"
	                    ],
	                    "NetworkID": "f6a558ad4da186a257d88623d03151fb94f07ef2561c9ff5d9618da08cd3b226",
	                    "EndpointID": "826951ac9dffb49dc6494b49d484cc879b24edbb367178b18b7bb6bc849dec4e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
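In the docker inspect output above the kic node container is still "Running": true and "Paused": false, while the assertion wanted the apiserver to report "Paused" after minikube pause and got "Stopped". A rough way to look at both layers by hand, assuming the embed-certs profile is still up; the profile name and the status command come from this report, and the inspect/ssh invocations are only illustrative:

	# Host level: state of the kic container backing the node.
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' embed-certs-20220601040915-2342

	# Node level: are the control-plane containers inside the node up or paused?
	out/minikube-darwin-amd64 ssh -p embed-certs-20220601040915-2342 "docker ps -a --filter name=kube-apiserver --format '{{.Names}}: {{.Status}}'"

	# What the failing assertion itself queries (start_stop_delete_test.go:313 above).
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342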
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220601040915-2342 logs -n 25
E0601 04:16:49.080431    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220601040915-2342 logs -n 25: (2.819688316s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p calico-20220601035308-2342                     | calico-20220601035308-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:06 PDT | 01 Jun 22 04:06 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p calico-20220601035308-2342                     | calico-20220601035308-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	| start   | -p false-20220601035307-2342                      | false-20220601035307-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:06 PDT | 01 Jun 22 04:07 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p false-20220601035307-2342                      | false-20220601035307-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p false-20220601035307-2342                      | false-20220601035307-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	| start   | -p bridge-20220601035306-2342                     | bridge-20220601035306-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220601035306-2342                     | bridge-20220601035306-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220601035306-2342                     | bridge-20220601035306-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	| start   | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	| start   | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220601040914-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | disable-driver-mounts-20220601040914-2342         |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:15 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:14:28
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:14:28.086015   13556 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:14:28.086165   13556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:14:28.086170   13556 out.go:309] Setting ErrFile to fd 2...
	I0601 04:14:28.086174   13556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:14:28.086295   13556 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:14:28.086578   13556 out.go:303] Setting JSON to false
	I0601 04:14:28.101590   13556 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4438,"bootTime":1654077630,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:14:28.101682   13556 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:14:28.123877   13556 out.go:177] * [old-k8s-version-20220601040844-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:14:28.166654   13556 notify.go:193] Checking for updates...
	I0601 04:14:28.188297   13556 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:14:28.209461   13556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:14:28.230448   13556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:14:28.251496   13556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:14:28.272505   13556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:14:28.294847   13556 config.go:178] Loaded profile config "old-k8s-version-20220601040844-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 04:14:28.317393   13556 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 04:14:28.338637   13556 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:14:28.412118   13556 docker.go:137] docker version: linux-20.10.14
	I0601 04:14:28.412264   13556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:14:28.539193   13556 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:14:28.479654171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:14:28.582897   13556 out.go:177] * Using the docker driver based on existing profile
	I0601 04:14:28.604731   13556 start.go:284] selected driver: docker
	I0601 04:14:28.604751   13556 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:28.604893   13556 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:14:28.607936   13556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:14:28.735652   13556 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:14:28.674188534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:14:28.735832   13556 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:14:28.735851   13556 cni.go:95] Creating CNI manager for ""
	I0601 04:14:28.735860   13556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:14:28.735872   13556 start_flags.go:306] config:
	{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:28.779373   13556 out.go:177] * Starting control plane node old-k8s-version-20220601040844-2342 in cluster old-k8s-version-20220601040844-2342
	I0601 04:14:28.800522   13556 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:14:28.821684   13556 out.go:177] * Pulling base image ...
	I0601 04:14:28.863807   13556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:14:28.863829   13556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:14:28.863901   13556 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 04:14:28.863913   13556 cache.go:57] Caching tarball of preloaded images
	I0601 04:14:28.864077   13556 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:14:28.864104   13556 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 04:14:28.864941   13556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:14:28.928843   13556 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:14:28.928860   13556 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:14:28.928872   13556 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:14:28.928926   13556 start.go:352] acquiring machines lock for old-k8s-version-20220601040844-2342: {Name:mkf87fe8c4a511c3ef565c4140ef4a74b527ad92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:14:28.929011   13556 start.go:356] acquired machines lock for "old-k8s-version-20220601040844-2342" in 58.74µs
	I0601 04:14:28.929029   13556 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:14:28.929038   13556 fix.go:55] fixHost starting: 
	I0601 04:14:28.929269   13556 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:14:28.996525   13556 fix.go:103] recreateIfNeeded on old-k8s-version-20220601040844-2342: state=Stopped err=<nil>
	W0601 04:14:28.996561   13556 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:14:29.018678   13556 out.go:177] * Restarting existing docker container for "old-k8s-version-20220601040844-2342" ...
	I0601 04:14:26.713137   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:29.211720   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:29.040137   13556 cli_runner.go:164] Run: docker start old-k8s-version-20220601040844-2342
	I0601 04:14:29.396533   13556 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:14:29.469773   13556 kic.go:416] container "old-k8s-version-20220601040844-2342" state is running.
	I0601 04:14:29.470677   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:29.548417   13556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:14:29.548828   13556 machine.go:88] provisioning docker machine ...
	I0601 04:14:29.548849   13556 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601040844-2342"
	I0601 04:14:29.548931   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:29.621945   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:29.622162   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:29.622174   13556 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601040844-2342 && echo "old-k8s-version-20220601040844-2342" | sudo tee /etc/hostname
	I0601 04:14:29.747098   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601040844-2342
	
	I0601 04:14:29.747180   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:29.820328   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:29.820477   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:29.820500   13556 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601040844-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601040844-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601040844-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:14:29.940163   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:14:29.940186   13556 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:14:29.940211   13556 ubuntu.go:177] setting up certificates
	I0601 04:14:29.940220   13556 provision.go:83] configureAuth start
	I0601 04:14:29.940277   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:30.010662   13556 provision.go:138] copyHostCerts
	I0601 04:14:30.010737   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:14:30.010745   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:14:30.010841   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:14:30.011037   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:14:30.011045   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:14:30.011106   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:14:30.011262   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:14:30.011268   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:14:30.011329   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:14:30.011453   13556 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601040844-2342 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601040844-2342]
	I0601 04:14:30.286843   13556 provision.go:172] copyRemoteCerts
	I0601 04:14:30.286906   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:14:30.286990   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.358351   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:30.446260   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 04:14:30.462814   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:14:30.479664   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:14:30.496600   13556 provision.go:86] duration metric: configureAuth took 556.361696ms
	I0601 04:14:30.496613   13556 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:14:30.496772   13556 config.go:178] Loaded profile config "old-k8s-version-20220601040844-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 04:14:30.496832   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.590582   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.590744   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.590754   13556 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:14:30.708363   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:14:30.708375   13556 ubuntu.go:71] root file system type: overlay
	I0601 04:14:30.708495   13556 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:14:30.708557   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.779562   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.779734   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.779783   13556 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:14:30.905450   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:14:30.905568   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.975523   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.975670   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.975682   13556 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:14:31.100262   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:14:31.100277   13556 machine.go:91] provisioned docker machine in 1.551423404s
	I0601 04:14:31.100285   13556 start.go:306] post-start starting for "old-k8s-version-20220601040844-2342" (driver="docker")
	I0601 04:14:31.100304   13556 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:14:31.100385   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:14:31.100437   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.170558   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.256710   13556 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:14:31.260531   13556 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:14:31.260550   13556 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:14:31.260557   13556 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:14:31.260562   13556 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:14:31.260570   13556 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:14:31.260671   13556 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:14:31.260804   13556 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:14:31.260969   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:14:31.268042   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:14:31.284690   13556 start.go:309] post-start completed in 184.378635ms
	I0601 04:14:31.284756   13556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:14:31.284800   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.355208   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.441386   13556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:14:31.446321   13556 fix.go:57] fixHost completed within 2.517256464s
	I0601 04:14:31.446333   13556 start.go:81] releasing machines lock for "old-k8s-version-20220601040844-2342", held for 2.517286389s
	I0601 04:14:31.446396   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:31.516485   13556 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:14:31.516500   13556 ssh_runner.go:195] Run: systemctl --version
	I0601 04:14:31.516551   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.516552   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.592361   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.594251   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.804333   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:14:31.815953   13556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:14:31.825522   13556 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:14:31.825585   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:14:31.834978   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:14:31.847979   13556 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:14:31.913965   13556 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:14:31.999816   13556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:14:32.009709   13556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:14:32.071375   13556 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:14:32.081029   13556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:14:32.117180   13556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:14:32.198594   13556 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 04:14:32.198786   13556 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601040844-2342 dig +short host.docker.internal
	I0601 04:14:32.332443   13556 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:14:32.332544   13556 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:14:32.336875   13556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:14:32.346622   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:32.417145   13556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:14:32.417220   13556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:14:32.446920   13556 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:14:32.446935   13556 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:14:32.446997   13556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:14:32.477668   13556 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:14:32.477689   13556 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:14:32.477781   13556 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:14:32.550798   13556 cni.go:95] Creating CNI manager for ""
	I0601 04:14:32.550810   13556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:14:32.550825   13556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:14:32.550841   13556 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601040844-2342 NodeName:old-k8s-version-20220601040844-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:14:32.550955   13556 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601040844-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601040844-2342
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:14:32.551029   13556 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601040844-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:14:32.551089   13556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 04:14:32.558618   13556 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:14:32.558675   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:14:32.565664   13556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0601 04:14:32.578127   13556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:14:32.591071   13556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0601 04:14:32.603679   13556 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:14:32.607411   13556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:14:32.616789   13556 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342 for IP: 192.168.58.2
	I0601 04:14:32.616910   13556 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:14:32.616965   13556 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:14:32.617049   13556 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.key
	I0601 04:14:32.617110   13556 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key.cee25041
	I0601 04:14:32.617164   13556 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key
	I0601 04:14:32.617380   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:14:32.617426   13556 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:14:32.617438   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:14:32.617470   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:14:32.617545   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:14:32.617575   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:14:32.617669   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:14:32.618227   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:14:32.635461   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:14:32.652286   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:14:32.671018   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:14:32.688359   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:14:32.705117   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:14:32.724039   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:14:32.740670   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:14:32.759632   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:14:32.776280   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:14:32.793455   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:14:32.810265   13556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:14:32.823671   13556 ssh_runner.go:195] Run: openssl version
	I0601 04:14:32.829634   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:14:32.838396   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.842798   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.842856   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.847925   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:14:32.855315   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:14:32.862997   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.866628   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.866669   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.871768   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:14:32.878782   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:14:32.886516   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.890228   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.890268   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.895408   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:14:32.904071   13556 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:32.904180   13556 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:14:32.940041   13556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:14:32.947460   13556 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:14:32.947477   13556 kubeadm.go:626] restartCluster start
	I0601 04:14:32.947520   13556 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:14:32.954241   13556 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:32.954322   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:33.025948   13556 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220601040844-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:14:33.026113   13556 kubeconfig.go:127] "old-k8s-version-20220601040844-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:14:33.027094   13556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:14:33.028479   13556 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:14:33.036254   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.036295   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.044520   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:31.711005   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:33.711604   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:33.246687   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.246868   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.257788   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.444816   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.444913   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.455791   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.644674   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.644862   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.655944   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.846690   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.846895   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.857517   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.044618   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.044715   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.054967   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.245414   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.245523   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.254445   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.445454   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.445514   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.454327   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.644792   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.644963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.655473   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.846688   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.846841   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.858268   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.044728   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.044849   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.054648   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.246768   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.246904   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.258518   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.445824   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.445917   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.459006   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.644848   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.644981   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.655077   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.846003   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.846189   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.856593   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.046650   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:36.046821   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:36.056452   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.056461   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:36.056500   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:36.064526   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.064537   13556 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:14:36.064545   13556 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:14:36.064600   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:14:36.094502   13556 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:14:36.105031   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:14:36.112474   13556 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun  1 11:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jun  1 11:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5923 Jun  1 11:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 Jun  1 11:10 /etc/kubernetes/scheduler.conf
	
	I0601 04:14:36.112530   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:14:36.119709   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:14:36.127589   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:14:36.135123   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:14:36.142623   13556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:14:36.149999   13556 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:14:36.150008   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:36.200699   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.148880   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.358149   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.419637   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.470094   13556 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:14:37.470154   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:37.978831   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:35.712835   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:38.211326   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:40.213566   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:38.480959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:38.978757   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:39.478959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:39.979253   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:40.478829   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:40.978939   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:41.479002   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:41.978797   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.478992   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.978850   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.711481   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:44.711726   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:43.478836   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:43.978895   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:44.479304   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:44.978970   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:45.480933   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:45.978812   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:46.478839   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:46.978909   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.480857   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.979175   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.206293   13348 pod_ready.go:81] duration metric: took 4m0.00469903s waiting for pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace to be "Ready" ...
	E0601 04:14:47.206309   13348 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 04:14:47.206357   13348 pod_ready.go:38] duration metric: took 4m12.450443422s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:14:47.206385   13348 kubeadm.go:630] restartCluster took 4m22.098213868s
	W0601 04:14:47.206459   13348 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 04:14:47.206477   13348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:14:48.481101   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:48.980947   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:49.478904   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:49.978978   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:50.478963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:50.979003   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:51.481089   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:51.979642   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:52.478906   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:52.980420   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:53.478907   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:53.978960   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:54.481043   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:54.979768   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:55.478958   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:55.979398   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:56.481048   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:56.979050   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:57.479407   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:57.979337   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:58.478988   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:58.981010   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:59.479766   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:59.979321   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:00.479311   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:00.980933   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:01.478999   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:01.979261   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:02.479982   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:02.979180   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:03.480051   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:03.980590   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:04.479052   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:04.979458   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:05.481240   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:05.979974   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:06.479186   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:06.979066   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:07.479325   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:07.981279   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:08.479532   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:08.979591   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:09.479222   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:09.979845   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:10.479574   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:10.979559   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:11.479793   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:11.979666   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:12.481279   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:12.981040   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:13.479755   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:13.979822   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:14.480950   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:14.979150   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:15.481354   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:15.980964   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:16.479268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:16.980881   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:17.479254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:17.979479   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:18.479959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:18.980556   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:19.479459   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:19.980773   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:20.479361   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:20.979442   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:21.481299   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:21.979515   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:22.479254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:22.979294   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.514791   13348 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.30787928s)
	I0601 04:15:25.514848   13348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:15:25.524659   13348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:15:25.532340   13348 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:15:25.532382   13348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:15:25.539449   13348 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:15:25.539478   13348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:15:26.001465   13348 out.go:204]   - Generating certificates and keys ...
	I0601 04:15:23.480422   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:23.979385   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:24.479212   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:24.979328   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.479798   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.979268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.480583   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.980525   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:27.479476   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:27.979316   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.900021   13348 out.go:204]   - Booting up control plane ...
	I0601 04:15:28.479579   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:28.979569   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:29.479329   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:29.979458   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:30.479302   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:30.981410   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:31.479357   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:31.979445   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:32.481142   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:32.979961   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:33.441758   13348 out.go:204]   - Configuring RBAC rules ...
	I0601 04:15:33.817366   13348 cni.go:95] Creating CNI manager for ""
	I0601 04:15:33.817380   13348 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:15:33.817411   13348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:15:33.817474   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:33.817477   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601040915-2342 minikube.k8s.io/updated_at=2022_06_01T04_15_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:33.829913   13348 ops.go:34] apiserver oom_adj: -16
	I0601 04:15:33.952431   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:34.583662   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:35.083687   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:33.479768   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:33.981202   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:34.479961   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:34.981313   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:35.480043   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:35.979601   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:36.481401   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:36.981605   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:37.479451   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:37.508591   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.508605   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:37.508660   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:37.537446   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.537458   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:37.537516   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:37.567876   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.567889   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:37.567948   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:37.598482   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.598495   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:37.598558   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:37.628232   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.628246   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:37.628311   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:37.657793   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.657804   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:37.657857   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:37.686640   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.686653   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:37.686709   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:37.715419   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.715431   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:37.715444   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:37.715451   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:35.583696   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:36.083821   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:36.584050   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:37.083830   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:37.583765   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:38.085669   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:38.583915   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:39.084747   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:39.584082   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:40.083913   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:39.769146   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053659443s)
	I0601 04:15:39.769292   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:39.769300   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:39.807272   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:39.807284   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:39.819291   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:39.819303   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:39.871102   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:39.871120   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:39.871129   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:42.383986   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:42.479616   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:42.510337   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.510350   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:42.510410   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:42.539205   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.539218   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:42.539278   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:42.568639   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.568652   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:42.568706   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:42.599882   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.599895   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:42.599958   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:42.635852   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.635869   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:42.635931   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:42.667445   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.667458   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:42.667520   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:42.698074   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.698087   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:42.698144   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:42.728427   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.728443   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:42.728450   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:42.728456   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:42.767219   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:42.767231   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:42.778821   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:42.778833   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:42.831064   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:42.831076   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:42.831082   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:42.843486   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:42.843502   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:40.584384   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:41.085783   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:41.584457   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:42.085550   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:42.584350   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:43.083869   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:43.585909   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:44.085711   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:44.583859   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:45.084457   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:45.583962   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:46.084550   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:46.583879   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:46.636471   13348 kubeadm.go:1045] duration metric: took 12.818913549s to wait for elevateKubeSystemPrivileges.
	I0601 04:15:46.636486   13348 kubeadm.go:397] StartCluster complete in 5m21.563724425s
	I0601 04:15:46.636505   13348 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:15:46.636584   13348 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:15:46.637339   13348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:15:47.152035   13348 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601040915-2342" rescaled to 1
	I0601 04:15:47.152072   13348 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:15:47.152097   13348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:15:47.152118   13348 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:15:47.152237   13348 config.go:178] Loaded profile config "embed-certs-20220601040915-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:15:47.191646   13348 out.go:177] * Verifying Kubernetes components...
	I0601 04:15:47.191727   13348 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601040915-2342"
	I0601 04:15:47.191733   13348 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601040915-2342"
	I0601 04:15:47.237676   13348 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601040915-2342"
	I0601 04:15:47.237679   13348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:15:47.237684   13348 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601040915-2342"
	W0601 04:15:47.237698   13348 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:15:47.191741   13348 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601040915-2342"
	I0601 04:15:47.237732   13348 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601040915-2342"
	I0601 04:15:47.191737   13348 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601040915-2342"
	W0601 04:15:47.237743   13348 addons.go:165] addon metrics-server should already be in state true
	I0601 04:15:47.237745   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.237749   13348 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601040915-2342"
	W0601 04:15:47.237756   13348 addons.go:165] addon dashboard should already be in state true
	I0601 04:15:47.237768   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.237780   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.237964   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.238061   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.238448   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.238927   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.248228   13348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:15:47.254630   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.386625   13348 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:15:47.369115   13348 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601040915-2342"
	I0601 04:15:47.388021   13348 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601040915-2342" to be "Ready" ...
	I0601 04:15:47.407471   13348 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:15:47.407487   13348 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	W0601 04:15:47.407525   13348 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:15:47.407546   13348 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:15:47.416521   13348 node_ready.go:49] node "embed-certs-20220601040915-2342" has status "Ready":"True"
	I0601 04:15:47.428627   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:15:47.449634   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:15:47.449634   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:15:47.449673   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.449679   13348 node_ready.go:38] duration metric: took 42.197918ms waiting for node "embed-certs-20220601040915-2342" to be "Ready" ...
	I0601 04:15:47.449694   13348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:15:47.449740   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.449740   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.450209   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.456248   13348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-qbslw" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:47.470426   13348 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 04:15:44.896657   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053119768s)
	I0601 04:15:47.398865   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:47.481492   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:47.529068   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.529088   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:47.529149   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:47.586875   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.586904   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:47.586983   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:47.638013   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.638050   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:47.638123   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:47.689527   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.689546   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:47.689618   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:47.725472   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.725488   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:47.725560   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:47.765312   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.765326   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:47.765394   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:47.796175   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.796187   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:47.796245   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:47.829157   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.829171   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:47.829180   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:47.829188   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:47.875377   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:47.875396   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:47.888770   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:47.888784   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:47.976340   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:47.976363   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:47.976372   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:47.991532   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:47.991545   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:47.491699   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:15:47.491721   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:15:47.491830   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.580709   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.580929   13348 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:15:47.580946   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:15:47.581038   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.588901   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.592673   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.669890   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.710232   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:15:47.710248   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:15:47.720739   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:15:47.727295   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:15:47.727314   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:15:47.733044   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:15:47.733063   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:15:47.748770   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:15:47.748784   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:15:47.752991   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:15:47.753004   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:15:47.811405   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:15:47.811417   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:15:47.822172   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:15:47.828766   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:15:47.913411   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:15:47.913427   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:15:47.954346   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:15:47.954364   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:15:48.047102   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:15:48.047116   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:15:48.129305   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:15:48.129327   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:15:48.141560   13348 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 04:15:48.221155   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:15:48.221172   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:15:48.245359   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:15:48.245378   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:15:48.335188   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:15:48.528294   13348 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601040915-2342"
	I0601 04:15:49.440786   13348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.105557548s)
	I0601 04:15:49.518034   13348 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0601 04:15:49.555070   13348 addons.go:417] enableAddons completed in 2.402933141s
	I0601 04:15:49.559841   13348 pod_ready.go:102] pod "coredns-64897985d-qbslw" in "kube-system" namespace has status "Ready":"False"
	I0601 04:15:51.482021   13348 pod_ready.go:92] pod "coredns-64897985d-qbslw" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.482038   13348 pod_ready.go:81] duration metric: took 4.011460184s waiting for pod "coredns-64897985d-qbslw" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.482045   13348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.488500   13348 pod_ready.go:92] pod "etcd-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.488510   13348 pod_ready.go:81] duration metric: took 6.460705ms waiting for pod "etcd-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.488517   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.495997   13348 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.496014   13348 pod_ready.go:81] duration metric: took 7.484769ms waiting for pod "kube-apiserver-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.496026   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.501668   13348 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.501678   13348 pod_ready.go:81] duration metric: took 5.644143ms waiting for pod "kube-controller-manager-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.501686   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mb57" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.508475   13348 pod_ready.go:92] pod "kube-proxy-7mb57" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.508486   13348 pod_ready.go:81] duration metric: took 6.793508ms waiting for pod "kube-proxy-7mb57" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.508495   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.909005   13348 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.909021   13348 pod_ready.go:81] duration metric: took 400.514079ms waiting for pod "kube-scheduler-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.909031   13348 pod_ready.go:38] duration metric: took 4.459273789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:15:51.909053   13348 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:15:51.909117   13348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:51.922754   13348 api_server.go:71] duration metric: took 4.770611079s to wait for apiserver process to appear ...
	I0601 04:15:51.922780   13348 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:15:51.922795   13348 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52129/healthz ...
	I0601 04:15:51.929644   13348 api_server.go:266] https://127.0.0.1:52129/healthz returned 200:
	ok
	I0601 04:15:51.931215   13348 api_server.go:140] control plane version: v1.23.6
	I0601 04:15:51.931229   13348 api_server.go:130] duration metric: took 8.442854ms to wait for apiserver health ...
	I0601 04:15:51.931234   13348 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:15:52.112099   13348 system_pods.go:59] 8 kube-system pods found
	I0601 04:15:52.112114   13348 system_pods.go:61] "coredns-64897985d-qbslw" [1546891e-3f79-4475-9e00-5dca188b84f4] Running
	I0601 04:15:52.112118   13348 system_pods.go:61] "etcd-embed-certs-20220601040915-2342" [082342e8-b730-43e0-bce6-c30c7ca25cbb] Running
	I0601 04:15:52.112123   13348 system_pods.go:61] "kube-apiserver-embed-certs-20220601040915-2342" [b9d04bd2-010a-4e74-9318-61c0dc1bc5db] Running
	I0601 04:15:52.112128   13348 system_pods.go:61] "kube-controller-manager-embed-certs-20220601040915-2342" [d4d32238-f326-47ae-bae0-8ee2bba91ab4] Running
	I0601 04:15:52.112133   13348 system_pods.go:61] "kube-proxy-7mb57" [f68290ed-e464-41c7-95b2-4f33f1235d53] Running
	I0601 04:15:52.112139   13348 system_pods.go:61] "kube-scheduler-embed-certs-20220601040915-2342" [9c01e4b8-96c1-4a80-82a7-82fb27a19fa0] Running
	I0601 04:15:52.112146   13348 system_pods.go:61] "metrics-server-b955d9d8-kww6s" [825e0282-313e-4c04-8170-bd3464a09492] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:15:52.112150   13348 system_pods.go:61] "storage-provisioner" [7e8d5700-1129-486a-af8a-2c8626a63671] Running
	I0601 04:15:52.112154   13348 system_pods.go:74] duration metric: took 180.914699ms to wait for pod list to return data ...
	I0601 04:15:52.112159   13348 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:15:52.279119   13348 default_sa.go:45] found service account: "default"
	I0601 04:15:52.279132   13348 default_sa.go:55] duration metric: took 166.966933ms for default service account to be created ...
	I0601 04:15:52.279138   13348 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:15:52.508035   13348 system_pods.go:86] 8 kube-system pods found
	I0601 04:15:52.508051   13348 system_pods.go:89] "coredns-64897985d-qbslw" [1546891e-3f79-4475-9e00-5dca188b84f4] Running
	I0601 04:15:52.508056   13348 system_pods.go:89] "etcd-embed-certs-20220601040915-2342" [082342e8-b730-43e0-bce6-c30c7ca25cbb] Running
	I0601 04:15:52.508059   13348 system_pods.go:89] "kube-apiserver-embed-certs-20220601040915-2342" [b9d04bd2-010a-4e74-9318-61c0dc1bc5db] Running
	I0601 04:15:52.508063   13348 system_pods.go:89] "kube-controller-manager-embed-certs-20220601040915-2342" [d4d32238-f326-47ae-bae0-8ee2bba91ab4] Running
	I0601 04:15:52.508068   13348 system_pods.go:89] "kube-proxy-7mb57" [f68290ed-e464-41c7-95b2-4f33f1235d53] Running
	I0601 04:15:52.508072   13348 system_pods.go:89] "kube-scheduler-embed-certs-20220601040915-2342" [9c01e4b8-96c1-4a80-82a7-82fb27a19fa0] Running
	I0601 04:15:52.508082   13348 system_pods.go:89] "metrics-server-b955d9d8-kww6s" [825e0282-313e-4c04-8170-bd3464a09492] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:15:52.508087   13348 system_pods.go:89] "storage-provisioner" [7e8d5700-1129-486a-af8a-2c8626a63671] Running
	I0601 04:15:52.508092   13348 system_pods.go:126] duration metric: took 228.947692ms to wait for k8s-apps to be running ...
	I0601 04:15:52.508097   13348 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:15:52.508146   13348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:15:52.518703   13348 system_svc.go:56] duration metric: took 10.601678ms WaitForService to wait for kubelet.
	I0601 04:15:52.518718   13348 kubeadm.go:572] duration metric: took 5.366572862s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:15:52.518742   13348 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:15:52.679725   13348 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:15:52.679738   13348 node_conditions.go:123] node cpu capacity is 6
	I0601 04:15:52.679747   13348 node_conditions.go:105] duration metric: took 160.998552ms to run NodePressure ...
	I0601 04:15:52.679756   13348 start.go:213] waiting for startup goroutines ...
	I0601 04:15:52.710982   13348 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:15:52.734487   13348 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220601040915-2342" cluster and "default" namespace by default
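The 13348 run above walks through its readiness gates in order: per-pod "Ready" waits, a pgrep check for the apiserver process, a /healthz probe, a kube-system pod listing, the default service account, the kubelet service, and a final NodePressure check before "Done!". A minimal Go sketch of the /healthz polling step recorded at api_server.go above follows; the URL is copied from the log, and skipping TLS verification is an assumption made only for this sketch (the real client trusts the cluster CA instead), so it is illustrative rather than minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
	// mirroring the "waiting for apiserver healthz status" loop in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch only: do not verify the apiserver cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver healthz not healthy within %s", timeout)
	}

	func main() {
		// Port 52129 is the host-mapped apiserver port seen in the log above.
		if err := waitForHealthz("https://127.0.0.1:52129/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}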
	I0601 04:15:50.056745   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065165785s)
	I0601 04:15:52.557096   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:52.979736   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:53.015032   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.015050   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:53.015130   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:53.052874   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.052890   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:53.052980   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:53.090000   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.109400   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:53.109482   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:53.143853   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.143871   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:53.143936   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:53.176667   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.176682   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:53.176750   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:53.209287   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.209304   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:53.209363   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:53.249867   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.249882   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:53.249952   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:53.291302   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.291317   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:53.291324   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:53.291331   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:53.347312   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:53.347331   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:53.364045   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:53.364061   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:53.437580   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:53.437590   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:53.437599   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:53.452321   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:53.452356   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:55.517741   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065349291s)
	I0601 04:15:58.018119   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:58.479675   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:58.510300   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.510315   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:58.510379   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:58.539824   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.539837   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:58.539903   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:58.574431   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.574444   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:58.574506   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:58.608048   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.608062   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:58.608126   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:58.643132   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.643149   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:58.643270   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:58.684314   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.684331   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:58.684411   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:58.729479   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.729493   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:58.729562   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:58.763728   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.763744   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:58.763752   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:58.763760   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:58.810477   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:58.810505   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:58.831095   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:58.831117   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:58.902361   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:58.902375   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:58.902384   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:58.918761   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:58.918777   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:00.984641   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065827748s)
	I0601 04:16:03.485999   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:03.979913   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:04.010953   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.010966   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:04.011018   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:04.039669   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.039684   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:04.039747   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:04.070923   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.070936   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:04.070991   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:04.100811   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.100824   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:04.100880   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:04.131464   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.131476   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:04.131531   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:04.165158   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.165170   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:04.165224   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:04.194459   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.194472   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:04.194528   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:04.223766   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.223779   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:04.223786   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:04.223793   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:04.264008   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:04.264021   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:04.275889   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:04.275901   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:04.333158   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:04.333175   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:04.333191   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:04.347399   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:04.347412   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:06.399938   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052491924s)
	I0601 04:16:08.902254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:08.980476   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:09.010579   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.010592   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:09.010645   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:09.038706   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.038718   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:09.038772   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:09.067068   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.067080   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:09.067135   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:09.097407   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.097419   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:09.097475   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:09.127332   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.127344   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:09.127402   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:09.157941   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.157958   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:09.158048   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:09.190368   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.190380   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:09.190435   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:09.223448   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.223461   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:09.223467   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:09.223474   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:09.265193   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:09.265207   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:09.277605   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:09.277624   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:09.331638   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:09.331655   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:09.331663   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:09.345526   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:09.345539   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:11.401324   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055748952s)
	I0601 04:16:13.902794   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:13.981915   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:14.012909   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.012922   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:14.012976   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:14.043088   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.043100   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:14.043156   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:14.073109   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.073121   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:14.073177   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:14.102553   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.102567   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:14.102621   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:14.132315   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.132329   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:14.132376   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:14.161620   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.161633   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:14.161691   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:14.190400   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.190413   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:14.190472   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:14.220208   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.220221   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:14.220228   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:14.220238   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:14.260342   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:14.260355   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:14.273591   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:14.273605   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:14.325967   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:14.325979   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:14.325986   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:14.338048   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:14.338059   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:16.397631   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059537775s)
	I0601 04:16:18.898002   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:18.980224   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:19.011707   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.011721   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:19.011789   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:19.041107   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.041118   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:19.041173   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:19.069931   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.069945   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:19.070004   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:19.099021   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.099032   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:19.099088   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:19.127973   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.127994   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:19.128051   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:19.156955   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.156968   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:19.157023   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:19.186132   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.186144   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:19.186203   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:19.215364   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.215375   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:19.215382   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:19.215390   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:19.227400   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:19.227412   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:21.281212   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053766355s)
	I0601 04:16:21.281318   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:21.281326   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:21.320693   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:21.320705   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:21.332980   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:21.332992   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:21.385783   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:23.888184   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:23.981315   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:24.012581   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.012595   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:24.012650   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:24.042236   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.042248   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:24.042307   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:24.070098   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.070111   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:24.070163   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:24.098624   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.098637   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:24.098696   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:24.127561   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.127574   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:24.127630   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:24.157059   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.157071   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:24.157129   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:24.187116   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.187135   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:24.187211   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:24.216004   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.216017   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:24.216024   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:24.216030   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:24.255821   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:24.255835   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:24.267821   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:24.267832   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:24.319990   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:24.320002   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:24.320010   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:24.331836   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:24.331847   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:26.392627   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060744494s)
	I0601 04:16:28.895005   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:28.981573   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:29.012096   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.012109   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:29.012164   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:29.040693   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.040707   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:29.040760   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:29.070396   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.070409   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:29.070478   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:29.100948   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.100961   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:29.101017   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:29.130251   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.130263   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:29.130318   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:29.158697   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.158709   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:29.158764   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:29.187980   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.187993   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:29.188049   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:29.216948   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.216959   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:29.216970   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:29.216977   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:29.256025   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:29.256038   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:29.267334   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:29.267346   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:29.319728   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:29.319745   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:29.319752   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:29.331962   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:29.331973   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:31.389033   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057024918s)
	I0601 04:16:33.889268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:33.980563   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:34.010516   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.010529   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:34.010584   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:34.039957   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.039968   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:34.040022   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:34.069056   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.069070   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:34.069126   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:34.099006   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.099022   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:34.099080   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:34.128051   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.128065   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:34.128123   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:34.157852   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.157865   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:34.157922   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:34.187417   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.187429   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:34.187484   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:34.217119   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.217131   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:34.217138   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:34.217146   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:34.269395   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:34.269405   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:34.269413   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:34.280972   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:34.280984   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:36.337032   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056013856s)
	I0601 04:16:36.337139   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:36.337145   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:36.376237   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:36.376250   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:38.890370   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:38.982134   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:39.013111   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.013124   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:39.013178   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:39.042635   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.042649   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:39.042702   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:39.072345   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.072358   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:39.072420   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:39.101587   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.101601   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:39.101655   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:39.130972   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.130985   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:39.131049   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:39.160564   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.160577   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:39.160630   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:39.190701   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.190714   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:39.190766   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:39.219934   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.219947   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:39.219954   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:39.219961   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:39.231641   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:39.231652   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:39.283515   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:39.283528   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:39.283536   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:39.295882   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:39.295893   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:41.351066   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055135322s)
	I0601 04:16:41.351176   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:41.351183   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:43.892267   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:43.980540   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:44.012230   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.012242   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:44.012300   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:44.042000   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.042012   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:44.042066   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:44.070514   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.070527   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:44.070580   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:44.098378   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.098391   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:44.098453   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:44.128346   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.128359   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:44.128418   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:44.160355   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.160369   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:44.160421   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:44.189319   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.189331   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:44.189396   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:44.217737   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.217749   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:44.217756   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:44.217763   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:44.257762   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:44.257775   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:44.269620   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:44.269632   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:44.322533   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:44.322543   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:44.322550   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:44.334650   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:44.334662   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:46.388281   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053585675s)
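Before the log dump below, the 13556 run keeps repeating the same container scan: one docker ps -a --filter=name=k8s_<component> query per control-plane component, a logs.go:276 warning for every component that still has no container, then kubelet/dmesg/describe-nodes/Docker/crictl collection. A minimal Go sketch of that per-component scan is shown here; it is an illustrative reconstruction that runs directly on the node (not through ssh_runner as minikube does) and assumes the docker CLI is on PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same component names queried in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
			"kube-controller-manager",
		}
		for _, c := range components {
			// Equivalent of: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("docker ps failed for %q: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
			if len(ids) == 0 {
				fmt.Printf("warning: no container was found matching %q\n", c)
			}
		}
	}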
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:10:21 UTC, end at Wed 2022-06-01 11:16:48 UTC. --
	Jun 01 11:15:03 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:03.621274806Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=334e558a3e3d7771b69c650766fa4071c71be0bffbf850fe62c36f85c63b096e
	Jun 01 11:15:03 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:03.649811217Z" level=info msg="ignoring event" container=334e558a3e3d7771b69c650766fa4071c71be0bffbf850fe62c36f85c63b096e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:03 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:03.752058597Z" level=info msg="ignoring event" container=e6590e02b57542b0a2b063a9f10bba0fa1b3209cda39ee86722c2b1bb2d1783f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:13 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:13.885582940Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cd267b371119a9e69687a1f8d01e41d736bd88a92e300fdcec1cd6e26c2ebd6a
	Jun 01 11:15:13 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:13.915165447Z" level=info msg="ignoring event" container=cd267b371119a9e69687a1f8d01e41d736bd88a92e300fdcec1cd6e26c2ebd6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:14 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:14.025258913Z" level=info msg="ignoring event" container=f3f004f45cf4f6d48a4ba695c8af3521e93e1eb334f5688050a73c0f678a2075 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.089174790Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=a2455cc6958db6dac7520df085e4e7105df6e60816adbcb757cb10e3d22fe7a5
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.144127462Z" level=info msg="ignoring event" container=a2455cc6958db6dac7520df085e4e7105df6e60816adbcb757cb10e3d22fe7a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.268583161Z" level=info msg="ignoring event" container=57564088e68e3c5f56f6b873cbd231cde7a465250c712e448d8803440daa5622 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.361715591Z" level=info msg="ignoring event" container=c1c15366aaa4e5c7eb0fde3e2dfe6f0630bd7f99c4d585a819e95e6233875c52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.467329413Z" level=info msg="ignoring event" container=73852db45230c7651088161b35620122b27807e3340b690fa6b9b5c36c096ccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.571062751Z" level=info msg="ignoring event" container=1972b04f44b654c0ac154c275ec4ec89fcef03ceb631e3cb523501db2276744a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.672610365Z" level=info msg="ignoring event" container=e917162c8f74cd338048948ffd928ddb59d16e3041540ba8e746d78acdf867da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:49 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:49.218649145Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:15:49 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:49.218744924Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:15:49 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:49.220043202Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:15:50 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:50.943039966Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:15:51 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:51.136076715Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:15:54 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:54.594238610Z" level=info msg="ignoring event" container=1e6e37e16aff36fae1b7d43b2a85230a251e1a79843de08da47a8498dc126134 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:54 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:54.620197127Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 11:15:54 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:54.848655289Z" level=info msg="ignoring event" container=91d292e0b34d63a48ccc400a869798e07b06afd2064f00846cf8fa6f6330f78f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:16:00 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:00.970054779Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:00 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:00.970118376Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:00 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:00.971602420Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:12 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:12.156594319Z" level=info msg="ignoring event" container=2b1a3c9eea7c22a8f29ec8082e8210599d818f54e7f0f7cc43a7e0a503f4acf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	2b1a3c9eea7c2       a90209bb39e3d                                                                                    38 seconds ago       Exited              dashboard-metrics-scraper   2                   ece79194d732f
	1b4aac678305d       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   50 seconds ago       Running             kubernetes-dashboard        0                   f4ad7f4b10acb
	cf34a3255d826       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   f870f72070f27
	c6a78927eea84       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   e37b5972ae9d8
	a840f06fa35cd       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   521ccde101085
	d2234fc2c5bdc       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   f567cc3d66047
	edcf7b7cdc57c       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   f29e8787130b5
	0a65d7b6c4bf2       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   4d9c874090ced
	6d35436fc8f75       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   52de0c20b3aa4
	
	* 
	* ==> coredns [c6a78927eea8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601040915-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601040915-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=embed-certs-20220601040915-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_15_33_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:15:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601040915-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:16:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:15:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:15:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:15:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20220601040915-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                aaf2e84f-5c7b-4669-bc95-4bc03b406078
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-qbslw                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     63s
	  kube-system                 etcd-embed-certs-20220601040915-2342                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         75s
	  kube-system                 kube-apiserver-embed-certs-20220601040915-2342              250m (4%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-embed-certs-20220601040915-2342    200m (3%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-7mb57                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-embed-certs-20220601040915-2342              100m (1%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 metrics-server-b955d9d8-kww6s                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         61s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-ktbl2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-7fjk8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 62s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    82s (x5 over 82s)  kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x4 over 82s)  kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  82s (x5 over 82s)  kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 76s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                65s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeReady
	  Normal  Starting                 3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [0a65d7b6c4bf] <==
	* {"level":"info","ts":"2022-06-01T11:15:28.276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:15:28.276Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:embed-certs-20220601040915-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:16:49 up 57 min,  0 users,  load average: 0.66, 0.85, 0.96
	Linux embed-certs-20220601040915-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [edcf7b7cdc57] <==
	* I0601 11:15:31.955537       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:15:32.064729       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:15:32.069218       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:15:32.070055       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:15:32.073237       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:15:32.748289       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:15:33.656634       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:15:33.664707       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:15:33.675790       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:15:33.836643       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:15:46.272042       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:15:46.454598       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:15:47.004606       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:15:48.513582       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.101.67.57]
	E0601 11:15:48.524062       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0601 11:15:49.414627       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:15:49.414684       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:15:49.414690       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:15:49.426697       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.101.231.156]
	I0601 11:15:49.435185       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.52.123]
	W0601 11:16:49.372609       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:16:49.372661       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:16:49.372667       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [6d35436fc8f7] <==
	* I0601 11:15:46.448568       1 range_allocator.go:374] Set node embed-certs-20220601040915-2342 PodCIDR to [10.244.0.0/24]
	I0601 11:15:46.449649       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:15:46.456436       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0601 11:15:46.458546       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7mb57"
	I0601 11:15:46.469804       1 shared_informer.go:247] Caches are synced for TTL 
	I0601 11:15:46.472852       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:15:46.653383       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:15:46.659291       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-9vq84"
	I0601 11:15:46.915658       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:15:46.926914       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:15:46.926931       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:15:48.325482       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 11:15:48.338406       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-kww6s"
	I0601 11:15:49.312583       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 11:15:49.321487       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:15:49.323381       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0601 11:15:49.328688       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:15:49.329602       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:15:49.334379       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:15:49.334664       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:15:49.335172       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:15:49.341475       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-7fjk8"
	I0601 11:15:49.341545       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-ktbl2"
	E0601 11:16:46.208742       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:16:46.272226       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [a840f06fa35c] <==
	* I0601 11:15:46.985658       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:15:46.985766       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:15:46.985804       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:15:47.002290       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:15:47.002377       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:15:47.002398       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:15:47.002414       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:15:47.002671       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:15:47.003244       1 config.go:317] "Starting service config controller"
	I0601 11:15:47.003307       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:15:47.003250       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:15:47.003479       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:15:47.103844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:15:47.103865       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d2234fc2c5bd] <==
	* W0601 11:15:30.661647       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:15:30.661680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:15:30.661651       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:15:30.661691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:15:30.661732       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:15:30.661797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:15:31.631147       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:15:31.631267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:15:31.661723       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:15:31.661742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:15:31.733014       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:15:31.733067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:15:31.740895       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:15:31.740928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:15:31.767354       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:15:31.767399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:15:31.783597       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:15:31.783737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:15:31.829585       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:15:31.829678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0601 11:15:32.152506       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0601 11:15:34.071500       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 11:15:34.072186       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 11:15:34.212856       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 11:15:34.415017       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:10:21 UTC, end at Wed 2022-06-01 11:16:50 UTC. --
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.758976    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1546891e-3f79-4475-9e00-5dca188b84f4-config-volume\") pod \"coredns-64897985d-qbslw\" (UID: \"1546891e-3f79-4475-9e00-5dca188b84f4\") " pod="kube-system/coredns-64897985d-qbslw"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759095    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djp9r\" (UniqueName: \"kubernetes.io/projected/1546891e-3f79-4475-9e00-5dca188b84f4-kube-api-access-djp9r\") pod \"coredns-64897985d-qbslw\" (UID: \"1546891e-3f79-4475-9e00-5dca188b84f4\") " pod="kube-system/coredns-64897985d-qbslw"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759207    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/60ef3c6e-81c0-49c9-b5fb-f366fbe635ba-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-7fjk8\" (UID: \"60ef3c6e-81c0-49c9-b5fb-f366fbe635ba\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-7fjk8"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759230    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7e8d5700-1129-486a-af8a-2c8626a63671-tmp\") pod \"storage-provisioner\" (UID: \"7e8d5700-1129-486a-af8a-2c8626a63671\") " pod="kube-system/storage-provisioner"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759259    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52trh\" (UniqueName: \"kubernetes.io/projected/825e0282-313e-4c04-8170-bd3464a09492-kube-api-access-52trh\") pod \"metrics-server-b955d9d8-kww6s\" (UID: \"825e0282-313e-4c04-8170-bd3464a09492\") " pod="kube-system/metrics-server-b955d9d8-kww6s"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759274    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psxtv\" (UniqueName: \"kubernetes.io/projected/7e8d5700-1129-486a-af8a-2c8626a63671-kube-api-access-psxtv\") pod \"storage-provisioner\" (UID: \"7e8d5700-1129-486a-af8a-2c8626a63671\") " pod="kube-system/storage-provisioner"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759342    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgsp\" (UniqueName: \"kubernetes.io/projected/f68290ed-e464-41c7-95b2-4f33f1235d53-kube-api-access-mzgsp\") pod \"kube-proxy-7mb57\" (UID: \"f68290ed-e464-41c7-95b2-4f33f1235d53\") " pod="kube-system/kube-proxy-7mb57"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759380    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/825e0282-313e-4c04-8170-bd3464a09492-tmp-dir\") pod \"metrics-server-b955d9d8-kww6s\" (UID: \"825e0282-313e-4c04-8170-bd3464a09492\") " pod="kube-system/metrics-server-b955d9d8-kww6s"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759423    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f68290ed-e464-41c7-95b2-4f33f1235d53-lib-modules\") pod \"kube-proxy-7mb57\" (UID: \"f68290ed-e464-41c7-95b2-4f33f1235d53\") " pod="kube-system/kube-proxy-7mb57"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759446    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqvc\" (UniqueName: \"kubernetes.io/projected/60ef3c6e-81c0-49c9-b5fb-f366fbe635ba-kube-api-access-csqvc\") pod \"kubernetes-dashboard-8469778f77-7fjk8\" (UID: \"60ef3c6e-81c0-49c9-b5fb-f366fbe635ba\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-7fjk8"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759464    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f68290ed-e464-41c7-95b2-4f33f1235d53-kube-proxy\") pod \"kube-proxy-7mb57\" (UID: \"f68290ed-e464-41c7-95b2-4f33f1235d53\") " pod="kube-system/kube-proxy-7mb57"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759478    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a6a6bf42-1716-47bd-ae95-69ee9574f835-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-ktbl2\" (UID: \"a6a6bf42-1716-47bd-ae95-69ee9574f835\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-ktbl2"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759493    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7mf\" (UniqueName: \"kubernetes.io/projected/a6a6bf42-1716-47bd-ae95-69ee9574f835-kube-api-access-5x7mf\") pod \"dashboard-metrics-scraper-56974995fc-ktbl2\" (UID: \"a6a6bf42-1716-47bd-ae95-69ee9574f835\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-ktbl2"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759505    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f68290ed-e464-41c7-95b2-4f33f1235d53-xtables-lock\") pod \"kube-proxy-7mb57\" (UID: \"f68290ed-e464-41c7-95b2-4f33f1235d53\") " pod="kube-system/kube-proxy-7mb57"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759516    7033 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.211586    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220601040915-2342\" already exists" pod="kube-system/etcd-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.326981    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220601040915-2342\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.526142    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220601040915-2342\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:48.721510    7033 request.go:665] Waited for 1.019386488s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.725697    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220601040915-2342\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861096    7033 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861222    7033 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1546891e-3f79-4475-9e00-5dca188b84f4-config-volume podName:1546891e-3f79-4475-9e00-5dca188b84f4 nodeName:}" failed. No retries permitted until 2022-06-01 11:16:49.361205111 +0000 UTC m=+3.021811311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1546891e-3f79-4475-9e00-5dca188b84f4-config-volume") pod "coredns-64897985d-qbslw" (UID: "1546891e-3f79-4475-9e00-5dca188b84f4") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861143    7033 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861288    7033 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f68290ed-e464-41c7-95b2-4f33f1235d53-kube-proxy podName:f68290ed-e464-41c7-95b2-4f33f1235d53 nodeName:}" failed. No retries permitted until 2022-06-01 11:16:49.361264666 +0000 UTC m=+3.021870859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f68290ed-e464-41c7-95b2-4f33f1235d53-kube-proxy") pod "kube-proxy-7mb57" (UID: "f68290ed-e464-41c7-95b2-4f33f1235d53") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:49 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:49.827254    7033 scope.go:110] "RemoveContainer" containerID="2b1a3c9eea7c22a8f29ec8082e8210599d818f54e7f0f7cc43a7e0a503f4acf1"
	
	* 
	* ==> kubernetes-dashboard [1b4aac678305] <==
	* 2022/06/01 11:15:59 Using namespace: kubernetes-dashboard
	2022/06/01 11:15:59 Using in-cluster config to connect to apiserver
	2022/06/01 11:15:59 Using secret token for csrf signing
	2022/06/01 11:15:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 11:15:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 11:15:59 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 11:15:59 Generating JWE encryption key
	2022/06/01 11:15:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 11:15:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 11:16:00 Initializing JWE encryption key from synchronized object
	2022/06/01 11:16:00 Creating in-cluster Sidecar client
	2022/06/01 11:16:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:16:00 Serving insecurely on HTTP port: 9090
	2022/06/01 11:16:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:15:59 Starting overwatch
	
	* 
	* ==> storage-provisioner [cf34a3255d82] <==
	* I0601 11:15:49.225914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:15:49.235296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:15:49.235352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:15:49.241443       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:15:49.241553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601040915-2342_76324e20-ad57-4f70-afe1-513a16e80173!
	I0601 11:15:49.242221       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4647ee28-d477-40c8-8e12-7514c6da4254", APIVersion:"v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220601040915-2342_76324e20-ad57-4f70-afe1-513a16e80173 became leader
	I0601 11:15:49.342304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601040915-2342_76324e20-ad57-4f70-afe1-513a16e80173!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-kww6s
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 describe pod metrics-server-b955d9d8-kww6s
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601040915-2342 describe pod metrics-server-b955d9d8-kww6s: exit status 1 (354.83637ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-kww6s" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601040915-2342 describe pod metrics-server-b955d9d8-kww6s: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601040915-2342
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601040915-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4",
	        "Created": "2022-06-01T11:09:22.674741756Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:10:21.816317184Z",
	            "FinishedAt": "2022-06-01T11:10:19.89091995Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/hosts",
	        "LogPath": "/var/lib/docker/containers/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4/6ae714b59470beda1dd8e47f2522c91fdf4a3c29db96acba5b0a1860f403d7c4-json.log",
	        "Name": "/embed-certs-20220601040915-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601040915-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601040915-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c00e526ea718e6a817a2c321f642ddedf5ed6242c7c4b44ead6e4132c89a5ed2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601040915-2342",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601040915-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601040915-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601040915-2342",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601040915-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96e2f1ed8e8102dd45ceac0ad7b3522b9f6f6ece308d57228a1c6a08b374db18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52128"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52129"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/96e2f1ed8e81",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601040915-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6ae714b59470",
	                        "embed-certs-20220601040915-2342"
	                    ],
	                    "NetworkID": "f6a558ad4da186a257d88623d03151fb94f07ef2561c9ff5d9618da08cd3b226",
	                    "EndpointID": "826951ac9dffb49dc6494b49d484cc879b24edbb367178b18b7bb6bc849dec4e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
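Editor's note: the JSON above is the raw `docker container inspect` capture for the embed-certs profile container. The fields the harness actually cares about (container state, the host-mapped ports, the per-profile network address) can be read back directly with docker's Go-template `--format` flag, which is the same mechanism the cli_runner calls recorded later in this log use. A minimal illustrative sketch, run against the container name shown above; the port and address values these would print are the ones recorded in the Ports and Networks blocks of the inspect output, not new data:

    # Container state, as queried by the harness
    docker container inspect embed-certs-20220601040915-2342 --format '{{.State.Status}}'

    # Host port mapped to 22/tcp (52125 per the Ports map above)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20220601040915-2342

    # Container IP on the per-profile network (192.168.49.2 per the Networks block above)
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-20220601040915-2342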
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220601040915-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220601040915-2342 logs -n 25: (2.706388731s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p calico-20220601035308-2342                     | calico-20220601035308-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	| start   | -p false-20220601035307-2342                      | false-20220601035307-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:06 PDT | 01 Jun 22 04:07 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p false-20220601035307-2342                      | false-20220601035307-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p false-20220601035307-2342                      | false-20220601035307-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	| start   | -p bridge-20220601035306-2342                     | bridge-20220601035306-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                           |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220601035306-2342                     | bridge-20220601035306-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:07 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220601035306-2342                     | bridge-20220601035306-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	| start   | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:07 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601035306-2342    | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | enable-default-cni-20220601035306-2342            |                                           |         |                |                     |                     |
	| start   | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	| ssh     | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:08 PDT | 01 Jun 22 04:08 PDT |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220601040914-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | disable-driver-mounts-20220601040914-2342         |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:15 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:14:28
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:14:28.086015   13556 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:14:28.086165   13556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:14:28.086170   13556 out.go:309] Setting ErrFile to fd 2...
	I0601 04:14:28.086174   13556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:14:28.086295   13556 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:14:28.086578   13556 out.go:303] Setting JSON to false
	I0601 04:14:28.101590   13556 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4438,"bootTime":1654077630,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:14:28.101682   13556 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:14:28.123877   13556 out.go:177] * [old-k8s-version-20220601040844-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:14:28.166654   13556 notify.go:193] Checking for updates...
	I0601 04:14:28.188297   13556 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:14:28.209461   13556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:14:28.230448   13556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:14:28.251496   13556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:14:28.272505   13556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:14:28.294847   13556 config.go:178] Loaded profile config "old-k8s-version-20220601040844-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 04:14:28.317393   13556 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 04:14:28.338637   13556 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:14:28.412118   13556 docker.go:137] docker version: linux-20.10.14
	I0601 04:14:28.412264   13556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:14:28.539193   13556 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:14:28.479654171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:14:28.582897   13556 out.go:177] * Using the docker driver based on existing profile
	I0601 04:14:28.604731   13556 start.go:284] selected driver: docker
	I0601 04:14:28.604751   13556 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:28.604893   13556 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:14:28.607936   13556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:14:28.735652   13556 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:14:28.674188534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:14:28.735832   13556 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:14:28.735851   13556 cni.go:95] Creating CNI manager for ""
	I0601 04:14:28.735860   13556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:14:28.735872   13556 start_flags.go:306] config:
	{Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:28.779373   13556 out.go:177] * Starting control plane node old-k8s-version-20220601040844-2342 in cluster old-k8s-version-20220601040844-2342
	I0601 04:14:28.800522   13556 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:14:28.821684   13556 out.go:177] * Pulling base image ...
	I0601 04:14:28.863807   13556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:14:28.863829   13556 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:14:28.863901   13556 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 04:14:28.863913   13556 cache.go:57] Caching tarball of preloaded images
	I0601 04:14:28.864077   13556 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:14:28.864104   13556 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 04:14:28.864941   13556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:14:28.928843   13556 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:14:28.928860   13556 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:14:28.928872   13556 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:14:28.928926   13556 start.go:352] acquiring machines lock for old-k8s-version-20220601040844-2342: {Name:mkf87fe8c4a511c3ef565c4140ef4a74b527ad92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:14:28.929011   13556 start.go:356] acquired machines lock for "old-k8s-version-20220601040844-2342" in 58.74µs
	I0601 04:14:28.929029   13556 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:14:28.929038   13556 fix.go:55] fixHost starting: 
	I0601 04:14:28.929269   13556 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:14:28.996525   13556 fix.go:103] recreateIfNeeded on old-k8s-version-20220601040844-2342: state=Stopped err=<nil>
	W0601 04:14:28.996561   13556 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:14:29.018678   13556 out.go:177] * Restarting existing docker container for "old-k8s-version-20220601040844-2342" ...
	I0601 04:14:26.713137   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:29.211720   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:29.040137   13556 cli_runner.go:164] Run: docker start old-k8s-version-20220601040844-2342
	I0601 04:14:29.396533   13556 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601040844-2342 --format={{.State.Status}}
	I0601 04:14:29.469773   13556 kic.go:416] container "old-k8s-version-20220601040844-2342" state is running.
	I0601 04:14:29.470677   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:29.548417   13556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/config.json ...
	I0601 04:14:29.548828   13556 machine.go:88] provisioning docker machine ...
	I0601 04:14:29.548849   13556 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601040844-2342"
	I0601 04:14:29.548931   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:29.621945   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:29.622162   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:29.622174   13556 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601040844-2342 && echo "old-k8s-version-20220601040844-2342" | sudo tee /etc/hostname
	I0601 04:14:29.747098   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601040844-2342
	
	I0601 04:14:29.747180   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:29.820328   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:29.820477   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:29.820500   13556 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601040844-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601040844-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601040844-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:14:29.940163   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:14:29.940186   13556 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:14:29.940211   13556 ubuntu.go:177] setting up certificates
	I0601 04:14:29.940220   13556 provision.go:83] configureAuth start
	I0601 04:14:29.940277   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:30.010662   13556 provision.go:138] copyHostCerts
	I0601 04:14:30.010737   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:14:30.010745   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:14:30.010841   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:14:30.011037   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:14:30.011045   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:14:30.011106   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:14:30.011262   13556 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:14:30.011268   13556 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:14:30.011329   13556 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:14:30.011453   13556 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601040844-2342 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601040844-2342]
	I0601 04:14:30.286843   13556 provision.go:172] copyRemoteCerts
	I0601 04:14:30.286906   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:14:30.286990   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.358351   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:30.446260   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 04:14:30.462814   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:14:30.479664   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:14:30.496600   13556 provision.go:86] duration metric: configureAuth took 556.361696ms
	I0601 04:14:30.496613   13556 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:14:30.496772   13556 config.go:178] Loaded profile config "old-k8s-version-20220601040844-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 04:14:30.496832   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.590582   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.590744   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.590754   13556 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:14:30.708363   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:14:30.708375   13556 ubuntu.go:71] root file system type: overlay
	I0601 04:14:30.708495   13556 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:14:30.708557   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.779562   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.779734   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.779783   13556 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:14:30.905450   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:14:30.905568   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:30.975523   13556 main.go:134] libmachine: Using SSH client type: native
	I0601 04:14:30.975670   13556 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52365 <nil> <nil>}
	I0601 04:14:30.975682   13556 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:14:31.100262   13556 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:14:31.100277   13556 machine.go:91] provisioned docker machine in 1.551423404s
	I0601 04:14:31.100285   13556 start.go:306] post-start starting for "old-k8s-version-20220601040844-2342" (driver="docker")
	I0601 04:14:31.100304   13556 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:14:31.100385   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:14:31.100437   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.170558   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.256710   13556 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:14:31.260531   13556 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:14:31.260550   13556 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:14:31.260557   13556 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:14:31.260562   13556 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:14:31.260570   13556 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:14:31.260671   13556 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:14:31.260804   13556 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:14:31.260969   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:14:31.268042   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:14:31.284690   13556 start.go:309] post-start completed in 184.378635ms
	I0601 04:14:31.284756   13556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:14:31.284800   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.355208   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.441386   13556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:14:31.446321   13556 fix.go:57] fixHost completed within 2.517256464s
	I0601 04:14:31.446333   13556 start.go:81] releasing machines lock for "old-k8s-version-20220601040844-2342", held for 2.517286389s
	I0601 04:14:31.446396   13556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601040844-2342
	I0601 04:14:31.516485   13556 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:14:31.516500   13556 ssh_runner.go:195] Run: systemctl --version
	I0601 04:14:31.516551   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.516552   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:31.592361   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.594251   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52365 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601040844-2342/id_rsa Username:docker}
	I0601 04:14:31.804333   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:14:31.815953   13556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:14:31.825522   13556 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:14:31.825585   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:14:31.834978   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:14:31.847979   13556 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:14:31.913965   13556 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:14:31.999816   13556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:14:32.009709   13556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:14:32.071375   13556 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:14:32.081029   13556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:14:32.117180   13556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:14:32.198594   13556 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 04:14:32.198786   13556 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601040844-2342 dig +short host.docker.internal
	I0601 04:14:32.332443   13556 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:14:32.332544   13556 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:14:32.336875   13556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:14:32.346622   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:32.417145   13556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 04:14:32.417220   13556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:14:32.446920   13556 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:14:32.446935   13556 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:14:32.446997   13556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:14:32.477668   13556 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 04:14:32.477689   13556 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:14:32.477781   13556 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:14:32.550798   13556 cni.go:95] Creating CNI manager for ""
	I0601 04:14:32.550810   13556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:14:32.550825   13556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:14:32.550841   13556 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601040844-2342 NodeName:old-k8s-version-20220601040844-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:14:32.550955   13556 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601040844-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601040844-2342
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:14:32.551029   13556 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601040844-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:14:32.551089   13556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 04:14:32.558618   13556 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:14:32.558675   13556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:14:32.565664   13556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0601 04:14:32.578127   13556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:14:32.591071   13556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2146 bytes)
	I0601 04:14:32.603679   13556 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:14:32.607411   13556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:14:32.616789   13556 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342 for IP: 192.168.58.2
	I0601 04:14:32.616910   13556 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:14:32.616965   13556 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:14:32.617049   13556 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/client.key
	I0601 04:14:32.617110   13556 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key.cee25041
	I0601 04:14:32.617164   13556 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key
	I0601 04:14:32.617380   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:14:32.617426   13556 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:14:32.617438   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:14:32.617470   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:14:32.617545   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:14:32.617575   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:14:32.617669   13556 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:14:32.618227   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:14:32.635461   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:14:32.652286   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:14:32.671018   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601040844-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:14:32.688359   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:14:32.705117   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:14:32.724039   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:14:32.740670   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:14:32.759632   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:14:32.776280   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:14:32.793455   13556 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:14:32.810265   13556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:14:32.823671   13556 ssh_runner.go:195] Run: openssl version
	I0601 04:14:32.829634   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:14:32.838396   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.842798   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.842856   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:14:32.847925   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:14:32.855315   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:14:32.862997   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.866628   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.866669   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:14:32.871768   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:14:32.878782   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:14:32.886516   13556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.890228   13556 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.890268   13556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:14:32.895408   13556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:14:32.904071   13556 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601040844-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601040844-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:14:32.904180   13556 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:14:32.940041   13556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:14:32.947460   13556 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:14:32.947477   13556 kubeadm.go:626] restartCluster start
	I0601 04:14:32.947520   13556 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:14:32.954241   13556 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:32.954322   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601040844-2342
	I0601 04:14:33.025948   13556 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220601040844-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:14:33.026113   13556 kubeconfig.go:127] "old-k8s-version-20220601040844-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:14:33.027094   13556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:14:33.028479   13556 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:14:33.036254   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.036295   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.044520   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:31.711005   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:33.711604   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:33.246687   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.246868   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.257788   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.444816   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.444913   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.455791   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.644674   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.644862   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.655944   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:33.846690   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:33.846895   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:33.857517   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.044618   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.044715   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.054967   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.245414   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.245523   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.254445   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.445454   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.445514   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.454327   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.644792   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.644963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.655473   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:34.846688   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:34.846841   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:34.858268   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.044728   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.044849   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.054648   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.246768   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.246904   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.258518   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.445824   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.445917   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.459006   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.644848   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.644981   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.655077   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:35.846003   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:35.846189   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:35.856593   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.046650   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:36.046821   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:36.056452   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.056461   13556 api_server.go:165] Checking apiserver status ...
	I0601 04:14:36.056500   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:14:36.064526   13556 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:14:36.064537   13556 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:14:36.064545   13556 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:14:36.064600   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:14:36.094502   13556 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:14:36.105031   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:14:36.112474   13556 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun  1 11:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jun  1 11:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5923 Jun  1 11:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 Jun  1 11:10 /etc/kubernetes/scheduler.conf
	
	I0601 04:14:36.112530   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:14:36.119709   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:14:36.127589   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:14:36.135123   13556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:14:36.142623   13556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:14:36.149999   13556 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:14:36.150008   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:36.200699   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.148880   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.358149   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.419637   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:14:37.470094   13556 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:14:37.470154   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:37.978831   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:35.712835   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:38.211326   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:40.213566   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:38.480959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:38.978757   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:39.478959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:39.979253   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:40.478829   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:40.978939   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:41.479002   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:41.978797   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.478992   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.978850   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:42.711481   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:44.711726   13348 pod_ready.go:102] pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace has status "Ready":"False"
	I0601 04:14:43.478836   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:43.978895   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:44.479304   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:44.978970   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:45.480933   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:45.978812   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:46.478839   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:46.978909   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.480857   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.979175   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:47.206293   13348 pod_ready.go:81] duration metric: took 4m0.00469903s waiting for pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace to be "Ready" ...
	E0601 04:14:47.206309   13348 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-cb4rd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 04:14:47.206357   13348 pod_ready.go:38] duration metric: took 4m12.450443422s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:14:47.206385   13348 kubeadm.go:630] restartCluster took 4m22.098213868s
	W0601 04:14:47.206459   13348 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 04:14:47.206477   13348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:14:48.481101   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:48.980947   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:49.478904   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:49.978978   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:50.478963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:50.979003   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:51.481089   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:51.979642   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:52.478906   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:52.980420   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:53.478907   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:53.978960   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:54.481043   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:54.979768   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:55.478958   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:55.979398   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:56.481048   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:56.979050   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:57.479407   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:57.979337   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:58.478988   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:58.981010   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:59.479766   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:14:59.979321   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:00.479311   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:00.980933   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:01.478999   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:01.979261   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:02.479982   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:02.979180   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:03.480051   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:03.980590   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:04.479052   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:04.979458   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:05.481240   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:05.979974   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:06.479186   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:06.979066   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:07.479325   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:07.981279   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:08.479532   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:08.979591   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:09.479222   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:09.979845   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:10.479574   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:10.979559   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:11.479793   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:11.979666   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:12.481279   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:12.981040   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:13.479755   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:13.979822   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:14.480950   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:14.979150   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:15.481354   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:15.980964   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:16.479268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:16.980881   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:17.479254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:17.979479   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:18.479959   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:18.980556   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:19.479459   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:19.980773   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:20.479361   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:20.979442   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:21.481299   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:21.979515   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:22.479254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:22.979294   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.514791   13348 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.30787928s)
	I0601 04:15:25.514848   13348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:15:25.524659   13348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:15:25.532340   13348 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:15:25.532382   13348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:15:25.539449   13348 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:15:25.539478   13348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:15:26.001465   13348 out.go:204]   - Generating certificates and keys ...
	I0601 04:15:23.480422   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:23.979385   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:24.479212   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:24.979328   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.479798   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:25.979268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.480583   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.980525   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:27.479476   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:27.979316   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:26.900021   13348 out.go:204]   - Booting up control plane ...
	I0601 04:15:28.479579   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:28.979569   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:29.479329   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:29.979458   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:30.479302   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:30.981410   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:31.479357   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:31.979445   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:32.481142   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:32.979961   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:33.441758   13348 out.go:204]   - Configuring RBAC rules ...
	I0601 04:15:33.817366   13348 cni.go:95] Creating CNI manager for ""
	I0601 04:15:33.817380   13348 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:15:33.817411   13348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:15:33.817474   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:33.817477   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=embed-certs-20220601040915-2342 minikube.k8s.io/updated_at=2022_06_01T04_15_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:33.829913   13348 ops.go:34] apiserver oom_adj: -16
	I0601 04:15:33.952431   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:34.583662   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:35.083687   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:33.479768   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:33.981202   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:34.479961   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:34.981313   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:35.480043   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:35.979601   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:36.481401   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:36.981605   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:37.479451   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:37.508591   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.508605   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:37.508660   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:37.537446   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.537458   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:37.537516   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:37.567876   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.567889   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:37.567948   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:37.598482   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.598495   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:37.598558   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:37.628232   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.628246   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:37.628311   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:37.657793   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.657804   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:37.657857   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:37.686640   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.686653   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:37.686709   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:37.715419   13556 logs.go:274] 0 containers: []
	W0601 04:15:37.715431   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:37.715444   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:37.715451   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:35.583696   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:36.083821   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:36.584050   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:37.083830   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:37.583765   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:38.085669   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:38.583915   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:39.084747   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:39.584082   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:40.083913   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:39.769146   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053659443s)
	I0601 04:15:39.769292   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:39.769300   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:39.807272   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:39.807284   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:39.819291   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:39.819303   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:39.871102   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:39.871120   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:39.871129   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:42.383986   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:42.479616   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:42.510337   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.510350   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:42.510410   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:42.539205   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.539218   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:42.539278   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:42.568639   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.568652   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:42.568706   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:42.599882   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.599895   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:42.599958   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:42.635852   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.635869   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:42.635931   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:42.667445   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.667458   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:42.667520   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:42.698074   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.698087   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:42.698144   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:42.728427   13556 logs.go:274] 0 containers: []
	W0601 04:15:42.728443   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:42.728450   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:42.728456   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:42.767219   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:42.767231   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:42.778821   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:42.778833   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:42.831064   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:42.831076   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:42.831082   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:42.843486   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:42.843502   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:40.584384   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:41.085783   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:41.584457   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:42.085550   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:42.584350   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:43.083869   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:43.585909   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:44.085711   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:44.583859   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:45.084457   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:45.583962   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:46.084550   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:46.583879   13348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:15:46.636471   13348 kubeadm.go:1045] duration metric: took 12.818913549s to wait for elevateKubeSystemPrivileges.
	I0601 04:15:46.636486   13348 kubeadm.go:397] StartCluster complete in 5m21.563724425s
	I0601 04:15:46.636505   13348 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:15:46.636584   13348 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:15:46.637339   13348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:15:47.152035   13348 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601040915-2342" rescaled to 1
	I0601 04:15:47.152072   13348 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:15:47.152097   13348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:15:47.152118   13348 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:15:47.152237   13348 config.go:178] Loaded profile config "embed-certs-20220601040915-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:15:47.191646   13348 out.go:177] * Verifying Kubernetes components...
	I0601 04:15:47.191727   13348 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601040915-2342"
	I0601 04:15:47.191733   13348 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601040915-2342"
	I0601 04:15:47.237676   13348 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601040915-2342"
	I0601 04:15:47.237679   13348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:15:47.237684   13348 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601040915-2342"
	W0601 04:15:47.237698   13348 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:15:47.191741   13348 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601040915-2342"
	I0601 04:15:47.237732   13348 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601040915-2342"
	I0601 04:15:47.191737   13348 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601040915-2342"
	W0601 04:15:47.237743   13348 addons.go:165] addon metrics-server should already be in state true
	I0601 04:15:47.237745   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.237749   13348 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601040915-2342"
	W0601 04:15:47.237756   13348 addons.go:165] addon dashboard should already be in state true
	I0601 04:15:47.237768   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.237780   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.237964   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.238061   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.238448   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.238927   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.248228   13348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:15:47.254630   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.386625   13348 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:15:47.369115   13348 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601040915-2342"
	I0601 04:15:47.388021   13348 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601040915-2342" to be "Ready" ...
	I0601 04:15:47.407471   13348 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:15:47.407487   13348 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	W0601 04:15:47.407525   13348 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:15:47.407546   13348 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:15:47.416521   13348 node_ready.go:49] node "embed-certs-20220601040915-2342" has status "Ready":"True"
	I0601 04:15:47.428627   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:15:47.449634   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:15:47.449634   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:15:47.449673   13348 host.go:66] Checking if "embed-certs-20220601040915-2342" exists ...
	I0601 04:15:47.449679   13348 node_ready.go:38] duration metric: took 42.197918ms waiting for node "embed-certs-20220601040915-2342" to be "Ready" ...
	I0601 04:15:47.449694   13348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:15:47.449740   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.449740   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.450209   13348 cli_runner.go:164] Run: docker container inspect embed-certs-20220601040915-2342 --format={{.State.Status}}
	I0601 04:15:47.456248   13348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-qbslw" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:47.470426   13348 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 04:15:44.896657   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053119768s)
	I0601 04:15:47.398865   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:47.481492   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:47.529068   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.529088   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:47.529149   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:47.586875   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.586904   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:47.586983   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:47.638013   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.638050   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:47.638123   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:47.689527   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.689546   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:47.689618   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:47.725472   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.725488   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:47.725560   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:47.765312   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.765326   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:47.765394   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:47.796175   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.796187   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:47.796245   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:47.829157   13556 logs.go:274] 0 containers: []
	W0601 04:15:47.829171   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:47.829180   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:47.829188   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:47.875377   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:47.875396   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:47.888770   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:47.888784   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:47.976340   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:47.976363   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:47.976372   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:47.991532   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:47.991545   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:47.491699   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:15:47.491721   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:15:47.491830   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.580709   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.580929   13348 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:15:47.580946   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:15:47.581038   13348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601040915-2342
	I0601 04:15:47.588901   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.592673   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.669890   13348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52125 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601040915-2342/id_rsa Username:docker}
	I0601 04:15:47.710232   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:15:47.710248   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:15:47.720739   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:15:47.727295   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:15:47.727314   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:15:47.733044   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:15:47.733063   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:15:47.748770   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:15:47.748784   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:15:47.752991   13348 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:15:47.753004   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:15:47.811405   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:15:47.811417   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:15:47.822172   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:15:47.828766   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:15:47.913411   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:15:47.913427   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:15:47.954346   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:15:47.954364   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:15:48.047102   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:15:48.047116   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:15:48.129305   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:15:48.129327   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:15:48.141560   13348 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 04:15:48.221155   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:15:48.221172   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:15:48.245359   13348 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:15:48.245378   13348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:15:48.335188   13348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:15:48.528294   13348 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601040915-2342"
	I0601 04:15:49.440786   13348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.105557548s)
	I0601 04:15:49.518034   13348 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0601 04:15:49.555070   13348 addons.go:417] enableAddons completed in 2.402933141s
	I0601 04:15:49.559841   13348 pod_ready.go:102] pod "coredns-64897985d-qbslw" in "kube-system" namespace has status "Ready":"False"
	I0601 04:15:51.482021   13348 pod_ready.go:92] pod "coredns-64897985d-qbslw" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.482038   13348 pod_ready.go:81] duration metric: took 4.011460184s waiting for pod "coredns-64897985d-qbslw" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.482045   13348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.488500   13348 pod_ready.go:92] pod "etcd-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.488510   13348 pod_ready.go:81] duration metric: took 6.460705ms waiting for pod "etcd-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.488517   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.495997   13348 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.496014   13348 pod_ready.go:81] duration metric: took 7.484769ms waiting for pod "kube-apiserver-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.496026   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.501668   13348 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.501678   13348 pod_ready.go:81] duration metric: took 5.644143ms waiting for pod "kube-controller-manager-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.501686   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mb57" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.508475   13348 pod_ready.go:92] pod "kube-proxy-7mb57" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.508486   13348 pod_ready.go:81] duration metric: took 6.793508ms waiting for pod "kube-proxy-7mb57" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.508495   13348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.909005   13348 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220601040915-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:15:51.909021   13348 pod_ready.go:81] duration metric: took 400.514079ms waiting for pod "kube-scheduler-embed-certs-20220601040915-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:15:51.909031   13348 pod_ready.go:38] duration metric: took 4.459273789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:15:51.909053   13348 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:15:51.909117   13348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:51.922754   13348 api_server.go:71] duration metric: took 4.770611079s to wait for apiserver process to appear ...
	I0601 04:15:51.922780   13348 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:15:51.922795   13348 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52129/healthz ...
	I0601 04:15:51.929644   13348 api_server.go:266] https://127.0.0.1:52129/healthz returned 200:
	ok
	I0601 04:15:51.931215   13348 api_server.go:140] control plane version: v1.23.6
	I0601 04:15:51.931229   13348 api_server.go:130] duration metric: took 8.442854ms to wait for apiserver health ...
	I0601 04:15:51.931234   13348 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:15:52.112099   13348 system_pods.go:59] 8 kube-system pods found
	I0601 04:15:52.112114   13348 system_pods.go:61] "coredns-64897985d-qbslw" [1546891e-3f79-4475-9e00-5dca188b84f4] Running
	I0601 04:15:52.112118   13348 system_pods.go:61] "etcd-embed-certs-20220601040915-2342" [082342e8-b730-43e0-bce6-c30c7ca25cbb] Running
	I0601 04:15:52.112123   13348 system_pods.go:61] "kube-apiserver-embed-certs-20220601040915-2342" [b9d04bd2-010a-4e74-9318-61c0dc1bc5db] Running
	I0601 04:15:52.112128   13348 system_pods.go:61] "kube-controller-manager-embed-certs-20220601040915-2342" [d4d32238-f326-47ae-bae0-8ee2bba91ab4] Running
	I0601 04:15:52.112133   13348 system_pods.go:61] "kube-proxy-7mb57" [f68290ed-e464-41c7-95b2-4f33f1235d53] Running
	I0601 04:15:52.112139   13348 system_pods.go:61] "kube-scheduler-embed-certs-20220601040915-2342" [9c01e4b8-96c1-4a80-82a7-82fb27a19fa0] Running
	I0601 04:15:52.112146   13348 system_pods.go:61] "metrics-server-b955d9d8-kww6s" [825e0282-313e-4c04-8170-bd3464a09492] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:15:52.112150   13348 system_pods.go:61] "storage-provisioner" [7e8d5700-1129-486a-af8a-2c8626a63671] Running
	I0601 04:15:52.112154   13348 system_pods.go:74] duration metric: took 180.914699ms to wait for pod list to return data ...
	I0601 04:15:52.112159   13348 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:15:52.279119   13348 default_sa.go:45] found service account: "default"
	I0601 04:15:52.279132   13348 default_sa.go:55] duration metric: took 166.966933ms for default service account to be created ...
	I0601 04:15:52.279138   13348 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:15:52.508035   13348 system_pods.go:86] 8 kube-system pods found
	I0601 04:15:52.508051   13348 system_pods.go:89] "coredns-64897985d-qbslw" [1546891e-3f79-4475-9e00-5dca188b84f4] Running
	I0601 04:15:52.508056   13348 system_pods.go:89] "etcd-embed-certs-20220601040915-2342" [082342e8-b730-43e0-bce6-c30c7ca25cbb] Running
	I0601 04:15:52.508059   13348 system_pods.go:89] "kube-apiserver-embed-certs-20220601040915-2342" [b9d04bd2-010a-4e74-9318-61c0dc1bc5db] Running
	I0601 04:15:52.508063   13348 system_pods.go:89] "kube-controller-manager-embed-certs-20220601040915-2342" [d4d32238-f326-47ae-bae0-8ee2bba91ab4] Running
	I0601 04:15:52.508068   13348 system_pods.go:89] "kube-proxy-7mb57" [f68290ed-e464-41c7-95b2-4f33f1235d53] Running
	I0601 04:15:52.508072   13348 system_pods.go:89] "kube-scheduler-embed-certs-20220601040915-2342" [9c01e4b8-96c1-4a80-82a7-82fb27a19fa0] Running
	I0601 04:15:52.508082   13348 system_pods.go:89] "metrics-server-b955d9d8-kww6s" [825e0282-313e-4c04-8170-bd3464a09492] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:15:52.508087   13348 system_pods.go:89] "storage-provisioner" [7e8d5700-1129-486a-af8a-2c8626a63671] Running
	I0601 04:15:52.508092   13348 system_pods.go:126] duration metric: took 228.947692ms to wait for k8s-apps to be running ...
	I0601 04:15:52.508097   13348 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:15:52.508146   13348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:15:52.518703   13348 system_svc.go:56] duration metric: took 10.601678ms WaitForService to wait for kubelet.
	I0601 04:15:52.518718   13348 kubeadm.go:572] duration metric: took 5.366572862s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:15:52.518742   13348 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:15:52.679725   13348 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:15:52.679738   13348 node_conditions.go:123] node cpu capacity is 6
	I0601 04:15:52.679747   13348 node_conditions.go:105] duration metric: took 160.998552ms to run NodePressure ...
	I0601 04:15:52.679756   13348 start.go:213] waiting for startup goroutines ...
	I0601 04:15:52.710982   13348 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:15:52.734487   13348 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220601040915-2342" cluster and "default" namespace by default
	I0601 04:15:50.056745   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065165785s)
	I0601 04:15:52.557096   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:52.979736   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:53.015032   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.015050   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:53.015130   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:53.052874   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.052890   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:53.052980   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:53.090000   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.109400   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:53.109482   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:53.143853   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.143871   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:53.143936   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:53.176667   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.176682   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:53.176750   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:53.209287   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.209304   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:53.209363   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:53.249867   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.249882   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:53.249952   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:53.291302   13556 logs.go:274] 0 containers: []
	W0601 04:15:53.291317   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:53.291324   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:53.291331   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:53.347312   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:53.347331   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:53.364045   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:53.364061   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:53.437580   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:53.437590   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:53.437599   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:53.452321   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:53.452356   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:15:55.517741   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065349291s)
	I0601 04:15:58.018119   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:15:58.479675   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:15:58.510300   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.510315   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:15:58.510379   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:15:58.539824   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.539837   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:15:58.539903   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:15:58.574431   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.574444   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:15:58.574506   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:15:58.608048   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.608062   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:15:58.608126   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:15:58.643132   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.643149   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:15:58.643270   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:15:58.684314   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.684331   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:15:58.684411   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:15:58.729479   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.729493   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:15:58.729562   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:15:58.763728   13556 logs.go:274] 0 containers: []
	W0601 04:15:58.763744   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:15:58.763752   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:15:58.763760   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:15:58.810477   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:15:58.810505   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:15:58.831095   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:15:58.831117   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:15:58.902361   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:15:58.902375   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:15:58.902384   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:15:58.918761   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:15:58.918777   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:00.984641   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065827748s)
	I0601 04:16:03.485999   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:03.979913   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:04.010953   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.010966   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:04.011018   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:04.039669   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.039684   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:04.039747   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:04.070923   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.070936   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:04.070991   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:04.100811   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.100824   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:04.100880   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:04.131464   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.131476   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:04.131531   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:04.165158   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.165170   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:04.165224   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:04.194459   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.194472   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:04.194528   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:04.223766   13556 logs.go:274] 0 containers: []
	W0601 04:16:04.223779   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:04.223786   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:04.223793   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:04.264008   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:04.264021   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:04.275889   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:04.275901   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:04.333158   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:04.333175   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:04.333191   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:04.347399   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:04.347412   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:06.399938   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052491924s)
	I0601 04:16:08.902254   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:08.980476   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:09.010579   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.010592   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:09.010645   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:09.038706   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.038718   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:09.038772   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:09.067068   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.067080   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:09.067135   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:09.097407   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.097419   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:09.097475   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:09.127332   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.127344   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:09.127402   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:09.157941   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.157958   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:09.158048   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:09.190368   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.190380   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:09.190435   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:09.223448   13556 logs.go:274] 0 containers: []
	W0601 04:16:09.223461   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:09.223467   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:09.223474   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:09.265193   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:09.265207   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:09.277605   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:09.277624   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:09.331638   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:09.331655   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:09.331663   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:09.345526   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:09.345539   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:11.401324   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055748952s)
	I0601 04:16:13.902794   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:13.981915   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:14.012909   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.012922   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:14.012976   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:14.043088   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.043100   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:14.043156   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:14.073109   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.073121   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:14.073177   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:14.102553   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.102567   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:14.102621   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:14.132315   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.132329   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:14.132376   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:14.161620   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.161633   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:14.161691   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:14.190400   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.190413   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:14.190472   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:14.220208   13556 logs.go:274] 0 containers: []
	W0601 04:16:14.220221   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:14.220228   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:14.220238   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:14.260342   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:14.260355   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:14.273591   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:14.273605   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:14.325967   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:14.325979   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:14.325986   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:14.338048   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:14.338059   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:16.397631   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059537775s)
	I0601 04:16:18.898002   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:18.980224   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:19.011707   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.011721   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:19.011789   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:19.041107   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.041118   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:19.041173   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:19.069931   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.069945   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:19.070004   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:19.099021   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.099032   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:19.099088   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:19.127973   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.127994   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:19.128051   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:19.156955   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.156968   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:19.157023   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:19.186132   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.186144   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:19.186203   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:19.215364   13556 logs.go:274] 0 containers: []
	W0601 04:16:19.215375   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:19.215382   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:19.215390   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:19.227400   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:19.227412   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:21.281212   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053766355s)
	I0601 04:16:21.281318   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:21.281326   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:21.320693   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:21.320705   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:21.332980   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:21.332992   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:21.385783   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:23.888184   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:23.981315   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:24.012581   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.012595   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:24.012650   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:24.042236   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.042248   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:24.042307   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:24.070098   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.070111   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:24.070163   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:24.098624   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.098637   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:24.098696   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:24.127561   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.127574   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:24.127630   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:24.157059   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.157071   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:24.157129   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:24.187116   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.187135   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:24.187211   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:24.216004   13556 logs.go:274] 0 containers: []
	W0601 04:16:24.216017   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:24.216024   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:24.216030   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:24.255821   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:24.255835   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:24.267821   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:24.267832   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:24.319990   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:24.320002   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:24.320010   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:24.331836   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:24.331847   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:26.392627   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060744494s)
	I0601 04:16:28.895005   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:28.981573   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:29.012096   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.012109   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:29.012164   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:29.040693   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.040707   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:29.040760   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:29.070396   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.070409   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:29.070478   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:29.100948   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.100961   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:29.101017   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:29.130251   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.130263   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:29.130318   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:29.158697   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.158709   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:29.158764   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:29.187980   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.187993   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:29.188049   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:29.216948   13556 logs.go:274] 0 containers: []
	W0601 04:16:29.216959   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:29.216970   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:29.216977   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:29.256025   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:29.256038   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:29.267334   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:29.267346   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:29.319728   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:29.319745   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:29.319752   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:29.331962   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:29.331973   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:31.389033   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057024918s)
	I0601 04:16:33.889268   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:33.980563   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:34.010516   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.010529   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:34.010584   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:34.039957   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.039968   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:34.040022   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:34.069056   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.069070   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:34.069126   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:34.099006   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.099022   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:34.099080   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:34.128051   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.128065   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:34.128123   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:34.157852   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.157865   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:34.157922   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:34.187417   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.187429   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:34.187484   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:34.217119   13556 logs.go:274] 0 containers: []
	W0601 04:16:34.217131   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:34.217138   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:34.217146   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:34.269395   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:34.269405   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:34.269413   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:34.280972   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:34.280984   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:36.337032   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056013856s)
	I0601 04:16:36.337139   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:36.337145   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:36.376237   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:36.376250   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:38.890370   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:38.982134   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:39.013111   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.013124   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:39.013178   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:39.042635   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.042649   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:39.042702   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:39.072345   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.072358   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:39.072420   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:39.101587   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.101601   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:39.101655   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:39.130972   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.130985   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:39.131049   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:39.160564   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.160577   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:39.160630   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:39.190701   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.190714   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:39.190766   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:39.219934   13556 logs.go:274] 0 containers: []
	W0601 04:16:39.219947   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:39.219954   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:39.219961   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:39.231641   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:39.231652   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:39.283515   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:39.283528   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:39.283536   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:39.295882   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:39.295893   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:41.351066   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055135322s)
	I0601 04:16:41.351176   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:41.351183   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:43.892267   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:16:43.980540   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:16:44.012230   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.012242   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:16:44.012300   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:16:44.042000   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.042012   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:16:44.042066   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:16:44.070514   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.070527   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:16:44.070580   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:16:44.098378   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.098391   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:16:44.098453   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:16:44.128346   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.128359   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:16:44.128418   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:16:44.160355   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.160369   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:16:44.160421   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:16:44.189319   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.189331   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:16:44.189396   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:16:44.217737   13556 logs.go:274] 0 containers: []
	W0601 04:16:44.217749   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:16:44.217756   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:16:44.217763   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:16:44.257762   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:16:44.257775   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:16:44.269620   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:16:44.269632   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:16:44.322533   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:16:44.322543   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:16:44.322550   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:16:44.334650   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:16:44.334662   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:16:46.388281   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053585675s)
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:10:21 UTC, end at Wed 2022-06-01 11:16:53 UTC. --
	Jun 01 11:15:13 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:13.915165447Z" level=info msg="ignoring event" container=cd267b371119a9e69687a1f8d01e41d736bd88a92e300fdcec1cd6e26c2ebd6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:14 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:14.025258913Z" level=info msg="ignoring event" container=f3f004f45cf4f6d48a4ba695c8af3521e93e1eb334f5688050a73c0f678a2075 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.089174790Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=a2455cc6958db6dac7520df085e4e7105df6e60816adbcb757cb10e3d22fe7a5
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.144127462Z" level=info msg="ignoring event" container=a2455cc6958db6dac7520df085e4e7105df6e60816adbcb757cb10e3d22fe7a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.268583161Z" level=info msg="ignoring event" container=57564088e68e3c5f56f6b873cbd231cde7a465250c712e448d8803440daa5622 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.361715591Z" level=info msg="ignoring event" container=c1c15366aaa4e5c7eb0fde3e2dfe6f0630bd7f99c4d585a819e95e6233875c52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.467329413Z" level=info msg="ignoring event" container=73852db45230c7651088161b35620122b27807e3340b690fa6b9b5c36c096ccf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.571062751Z" level=info msg="ignoring event" container=1972b04f44b654c0ac154c275ec4ec89fcef03ceb631e3cb523501db2276744a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:24 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:24.672610365Z" level=info msg="ignoring event" container=e917162c8f74cd338048948ffd928ddb59d16e3041540ba8e746d78acdf867da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:49 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:49.218649145Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:15:49 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:49.218744924Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:15:49 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:49.220043202Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:15:50 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:50.943039966Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:15:51 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:51.136076715Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:15:54 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:54.594238610Z" level=info msg="ignoring event" container=1e6e37e16aff36fae1b7d43b2a85230a251e1a79843de08da47a8498dc126134 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:15:54 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:54.620197127Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 11:15:54 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:15:54.848655289Z" level=info msg="ignoring event" container=91d292e0b34d63a48ccc400a869798e07b06afd2064f00846cf8fa6f6330f78f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:16:00 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:00.970054779Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:00 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:00.970118376Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:00 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:00.971602420Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:12 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:12.156594319Z" level=info msg="ignoring event" container=2b1a3c9eea7c22a8f29ec8082e8210599d818f54e7f0f7cc43a7e0a503f4acf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:16:50 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:50.053238724Z" level=info msg="ignoring event" container=eaec2233a802b0e9ea82a10695708fd129df8ed64b28254450e443a3f654f91e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:16:51 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:51.047481629Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:51 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:51.047525854Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:16:51 embed-certs-20220601040915-2342 dockerd[130]: time="2022-06-01T11:16:51.048956221Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	eaec2233a802b       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   3                   ece79194d732f
	1b4aac678305d       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   54 seconds ago       Running             kubernetes-dashboard        0                   f4ad7f4b10acb
	cf34a3255d826       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   f870f72070f27
	c6a78927eea84       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   e37b5972ae9d8
	a840f06fa35cd       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   521ccde101085
	d2234fc2c5bdc       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   f567cc3d66047
	edcf7b7cdc57c       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   f29e8787130b5
	0a65d7b6c4bf2       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   4d9c874090ced
	6d35436fc8f75       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   52de0c20b3aa4
	
	* 
	* ==> coredns [c6a78927eea8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601040915-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601040915-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=embed-certs-20220601040915-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_15_33_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:15:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601040915-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:16:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:15:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:15:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:15:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:16:46 +0000   Wed, 01 Jun 2022 11:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20220601040915-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                aaf2e84f-5c7b-4669-bc95-4bc03b406078
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-qbslw                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     67s
	  kube-system                 etcd-embed-certs-20220601040915-2342                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kube-apiserver-embed-certs-20220601040915-2342             250m (4%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-embed-certs-20220601040915-2342    200m (3%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-7mb57                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-embed-certs-20220601040915-2342             100m (1%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 metrics-server-b955d9d8-kww6s                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         65s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-ktbl2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-7fjk8                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 66s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    86s (x5 over 86s)  kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x4 over 86s)  kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  86s (x5 over 86s)  kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 80s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                69s                kubelet     Node embed-certs-20220601040915-2342 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet     Node embed-certs-20220601040915-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [0a65d7b6c4bf] <==
	* {"level":"info","ts":"2022-06-01T11:15:28.276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:15:28.276Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:15:28.277Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:embed-certs-20220601040915-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.965Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:15:28.966Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:16:54 up 57 min,  0 users,  load average: 0.77, 0.87, 0.97
	Linux embed-certs-20220601040915-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [edcf7b7cdc57] <==
	* I0601 11:15:31.955537       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:15:32.064729       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:15:32.069218       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:15:32.070055       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:15:32.073237       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:15:32.748289       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:15:33.656634       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:15:33.664707       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:15:33.675790       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:15:33.836643       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:15:46.272042       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:15:46.454598       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:15:47.004606       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:15:48.513582       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.101.67.57]
	E0601 11:15:48.524062       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0601 11:15:49.414627       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:15:49.414684       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:15:49.414690       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:15:49.426697       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.101.231.156]
	I0601 11:15:49.435185       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.52.123]
	W0601 11:16:49.372609       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:16:49.372661       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:16:49.372667       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [6d35436fc8f7] <==
	* I0601 11:15:46.448568       1 range_allocator.go:374] Set node embed-certs-20220601040915-2342 PodCIDR to [10.244.0.0/24]
	I0601 11:15:46.449649       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:15:46.456436       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0601 11:15:46.458546       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7mb57"
	I0601 11:15:46.469804       1 shared_informer.go:247] Caches are synced for TTL 
	I0601 11:15:46.472852       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:15:46.653383       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:15:46.659291       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-9vq84"
	I0601 11:15:46.915658       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:15:46.926914       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:15:46.926931       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:15:48.325482       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 11:15:48.338406       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-kww6s"
	I0601 11:15:49.312583       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 11:15:49.321487       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:15:49.323381       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0601 11:15:49.328688       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:15:49.329602       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:15:49.334379       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:15:49.334664       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:15:49.335172       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:15:49.341475       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-7fjk8"
	I0601 11:15:49.341545       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-ktbl2"
	E0601 11:16:46.208742       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:16:46.272226       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [a840f06fa35c] <==
	* I0601 11:15:46.985658       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:15:46.985766       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:15:46.985804       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:15:47.002290       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:15:47.002377       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:15:47.002398       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:15:47.002414       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:15:47.002671       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:15:47.003244       1 config.go:317] "Starting service config controller"
	I0601 11:15:47.003307       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:15:47.003250       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:15:47.003479       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:15:47.103844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:15:47.103865       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d2234fc2c5bd] <==
	* W0601 11:15:30.661647       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:15:30.661680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:15:30.661651       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:15:30.661691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:15:30.661732       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:15:30.661797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:15:31.631147       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:15:31.631267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:15:31.661723       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:15:31.661742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:15:31.733014       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:15:31.733067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:15:31.740895       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:15:31.740928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:15:31.767354       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:15:31.767399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:15:31.783597       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:15:31.783737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:15:31.829585       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:15:31.829678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0601 11:15:32.152506       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0601 11:15:34.071500       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 11:15:34.072186       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 11:15:34.212856       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 11:15:34.415017       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:10:21 UTC, end at Wed 2022-06-01 11:16:54 UTC. --
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759446    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqvc\" (UniqueName: \"kubernetes.io/projected/60ef3c6e-81c0-49c9-b5fb-f366fbe635ba-kube-api-access-csqvc\") pod \"kubernetes-dashboard-8469778f77-7fjk8\" (UID: \"60ef3c6e-81c0-49c9-b5fb-f366fbe635ba\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-7fjk8"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759464    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f68290ed-e464-41c7-95b2-4f33f1235d53-kube-proxy\") pod \"kube-proxy-7mb57\" (UID: \"f68290ed-e464-41c7-95b2-4f33f1235d53\") " pod="kube-system/kube-proxy-7mb57"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759478    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a6a6bf42-1716-47bd-ae95-69ee9574f835-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-ktbl2\" (UID: \"a6a6bf42-1716-47bd-ae95-69ee9574f835\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-ktbl2"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759493    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7mf\" (UniqueName: \"kubernetes.io/projected/a6a6bf42-1716-47bd-ae95-69ee9574f835-kube-api-access-5x7mf\") pod \"dashboard-metrics-scraper-56974995fc-ktbl2\" (UID: \"a6a6bf42-1716-47bd-ae95-69ee9574f835\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-ktbl2"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759505    7033 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f68290ed-e464-41c7-95b2-4f33f1235d53-xtables-lock\") pod \"kube-proxy-7mb57\" (UID: \"f68290ed-e464-41c7-95b2-4f33f1235d53\") " pod="kube-system/kube-proxy-7mb57"
	Jun 01 11:16:47 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:47.759516    7033 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.211586    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220601040915-2342\" already exists" pod="kube-system/etcd-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.326981    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220601040915-2342\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.526142    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220601040915-2342\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:48.721510    7033 request.go:665] Waited for 1.019386488s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.725697    7033 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220601040915-2342\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220601040915-2342"
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861096    7033 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861222    7033 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1546891e-3f79-4475-9e00-5dca188b84f4-config-volume podName:1546891e-3f79-4475-9e00-5dca188b84f4 nodeName:}" failed. No retries permitted until 2022-06-01 11:16:49.361205111 +0000 UTC m=+3.021811311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1546891e-3f79-4475-9e00-5dca188b84f4-config-volume") pod "coredns-64897985d-qbslw" (UID: "1546891e-3f79-4475-9e00-5dca188b84f4") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861143    7033 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:48 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:48.861288    7033 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f68290ed-e464-41c7-95b2-4f33f1235d53-kube-proxy podName:f68290ed-e464-41c7-95b2-4f33f1235d53 nodeName:}" failed. No retries permitted until 2022-06-01 11:16:49.361264666 +0000 UTC m=+3.021870859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f68290ed-e464-41c7-95b2-4f33f1235d53-kube-proxy") pod "kube-proxy-7mb57" (UID: "f68290ed-e464-41c7-95b2-4f33f1235d53") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:16:49 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:49.827254    7033 scope.go:110] "RemoveContainer" containerID="2b1a3c9eea7c22a8f29ec8082e8210599d818f54e7f0f7cc43a7e0a503f4acf1"
	Jun 01 11:16:50 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:50.718169    7033 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-ktbl2 through plugin: invalid network status for"
	Jun 01 11:16:50 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:50.722029    7033 scope.go:110] "RemoveContainer" containerID="2b1a3c9eea7c22a8f29ec8082e8210599d818f54e7f0f7cc43a7e0a503f4acf1"
	Jun 01 11:16:50 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:50.722273    7033 scope.go:110] "RemoveContainer" containerID="eaec2233a802b0e9ea82a10695708fd129df8ed64b28254450e443a3f654f91e"
	Jun 01 11:16:50 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:50.722445    7033 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-ktbl2_kubernetes-dashboard(a6a6bf42-1716-47bd-ae95-69ee9574f835)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-ktbl2" podUID=a6a6bf42-1716-47bd-ae95-69ee9574f835
	Jun 01 11:16:51 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:51.049446    7033 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:16:51 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:51.049499    7033 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:16:51 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:51.049635    7033 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-52trh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHa
ndler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-kww6s_kube-system(825e0282-313e-4c04-8170-bd3464a09492): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 11:16:51 embed-certs-20220601040915-2342 kubelet[7033]: E0601 11:16:51.049668    7033 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-kww6s" podUID=825e0282-313e-4c04-8170-bd3464a09492
	Jun 01 11:16:51 embed-certs-20220601040915-2342 kubelet[7033]: I0601 11:16:51.732003    7033 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-ktbl2 through plugin: invalid network status for"
	
	* 
	* ==> kubernetes-dashboard [1b4aac678305] <==
	* 2022/06/01 11:15:59 Using namespace: kubernetes-dashboard
	2022/06/01 11:15:59 Using in-cluster config to connect to apiserver
	2022/06/01 11:15:59 Using secret token for csrf signing
	2022/06/01 11:15:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 11:15:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 11:15:59 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 11:15:59 Generating JWE encryption key
	2022/06/01 11:15:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 11:15:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 11:16:00 Initializing JWE encryption key from synchronized object
	2022/06/01 11:16:00 Creating in-cluster Sidecar client
	2022/06/01 11:16:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:16:00 Serving insecurely on HTTP port: 9090
	2022/06/01 11:16:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:15:59 Starting overwatch
	
	* 
	* ==> storage-provisioner [cf34a3255d82] <==
	* I0601 11:15:49.225914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:15:49.235296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:15:49.235352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:15:49.241443       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:15:49.241553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601040915-2342_76324e20-ad57-4f70-afe1-513a16e80173!
	I0601 11:15:49.242221       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4647ee28-d477-40c8-8e12-7514c6da4254", APIVersion:"v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220601040915-2342_76324e20-ad57-4f70-afe1-513a16e80173 became leader
	I0601 11:15:49.342304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601040915-2342_76324e20-ad57-4f70-afe1-513a16e80173!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-kww6s
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 describe pod metrics-server-b955d9d8-kww6s
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601040915-2342 describe pod metrics-server-b955d9d8-kww6s: exit status 1 (294.892124ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-kww6s" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601040915-2342 describe pod metrics-server-b955d9d8-kww6s: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:22:51.711519    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:23:30.441091    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:24:31.667464    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:25:14.701856    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:25:49.690881    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 04:25:54.768487    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:26:11.960050    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:26:37.756138    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:26:47.038056    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:26:49.126030    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:27:27.580996    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:27:35.017567    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:27:40.696356    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:27:51.715865    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:27:53.011542    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:53.017992    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:53.029473    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:53.051716    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:53.093912    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:53.174128    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:53.334997    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:53.655874    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:54.296095    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:55.576689    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:27:58.137424    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:28:03.259768    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:28:10.091896    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:28:13.500067    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:28:30.445364    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:28:33.980812    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:28:50.632684    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:29:00.631620    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:29:14.841976    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:29:14.941621    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:29:31.671441    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:29:53.504078    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:30:14.707480    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:30:23.684984    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:30:36.863803    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
E0601 04:30:43.752947    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:30:49.695346    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:31:11.963854    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:276: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (524.548345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:276: status error: exit status 2 (may be ok)
start_stop_delete_test.go:276: "old-k8s-version-20220601040844-2342" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601040844-2342
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601040844-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef",
	        "Created": "2022-06-01T11:08:51.714948054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210556,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:14:29.397998414Z",
	            "FinishedAt": "2022-06-01T11:14:26.589423316Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hosts",
	        "LogPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef-json.log",
	        "Name": "/old-k8s-version-20220601040844-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601040844-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601040844-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601040844-2342",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601040844-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601040844-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67742c0ebbdd1f76c16da912020c2ef1bdaa88cf6af0da25d66eaecd83c8f4d5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52365"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52366"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52367"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52368"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52369"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67742c0ebbdd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601040844-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91a44163d235",
	                        "old-k8s-version-20220601040844-2342"
	                    ],
	                    "NetworkID": "19418e1daf902e10e91ecb0632ae46e6cbb8b43c0deeca829a591ae95b7f1e4b",
	                    "EndpointID": "f03c2fa8111d36ee41f3d8b53613ddd37aee00df9d89313a9d833d5735db5784",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (466.560958ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220601040844-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220601040844-2342 logs -n 25: (3.519857891s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| unpause | -p                                                | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:17 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342               | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:22 PDT | 01 Jun 22 04:22 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:23 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                    | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                    | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:26:00
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:26:00.480154   14580 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:26:00.480367   14580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:26:00.480372   14580 out.go:309] Setting ErrFile to fd 2...
	I0601 04:26:00.480376   14580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:26:00.480472   14580 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:26:00.480724   14580 out.go:303] Setting JSON to false
	I0601 04:26:00.495972   14580 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5130,"bootTime":1654077630,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:26:00.496148   14580 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:26:00.518702   14580 out.go:177] * [default-k8s-different-port-20220601042455-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:26:00.540230   14580 notify.go:193] Checking for updates...
	I0601 04:26:00.562166   14580 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:26:00.584188   14580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:26:00.606131   14580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:26:00.628160   14580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:26:00.649242   14580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:26:00.671625   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:26:00.672286   14580 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:26:00.744341   14580 docker.go:137] docker version: linux-20.10.14
	I0601 04:26:00.744470   14580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:26:00.876577   14580 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:26:00.816467951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:26:00.952426   14580 out.go:177] * Using the docker driver based on existing profile
	I0601 04:26:00.974167   14580 start.go:284] selected driver: docker
	I0601 04:26:00.974245   14580 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-
20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:00.974396   14580 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:26:00.977629   14580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:26:01.109165   14580 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:26:01.046549265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:26:01.109394   14580 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:26:01.109422   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:01.109436   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:01.109476   14580 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:01.131366   14580 out.go:177] * Starting control plane node default-k8s-different-port-20220601042455-2342 in cluster default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.152261   14580 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:26:01.174273   14580 out.go:177] * Pulling base image ...
	I0601 04:26:01.217266   14580 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:26:01.217322   14580 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:26:01.217364   14580 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:26:01.217394   14580 cache.go:57] Caching tarball of preloaded images
	I0601 04:26:01.217602   14580 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:26:01.217630   14580 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 04:26:01.218950   14580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/config.json ...
	I0601 04:26:01.292480   14580 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:26:01.292500   14580 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:26:01.292539   14580 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:26:01.292604   14580 start.go:352] acquiring machines lock for default-k8s-different-port-20220601042455-2342: {Name:mk23c69651775934f6906af797d469ba81c716b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:26:01.292726   14580 start.go:356] acquired machines lock for "default-k8s-different-port-20220601042455-2342" in 86.12µs
	I0601 04:26:01.292762   14580 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:26:01.292771   14580 fix.go:55] fixHost starting: 
	I0601 04:26:01.293035   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:26:01.364849   14580 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601042455-2342: state=Stopped err=<nil>
	W0601 04:26:01.364882   14580 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:26:01.386819   14580 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601042455-2342" ...
	I0601 04:26:01.408496   14580 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.820280   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:26:01.895506   14580 kic.go:416] container "default-k8s-different-port-20220601042455-2342" state is running.
	I0601 04:26:01.896465   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.978243   14580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/config.json ...
	I0601 04:26:01.978645   14580 machine.go:88] provisioning docker machine ...
	I0601 04:26:01.978666   14580 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601042455-2342"
	I0601 04:26:01.978721   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.057577   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.057774   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.057800   14580 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601042455-2342 && echo "default-k8s-different-port-20220601042455-2342" | sudo tee /etc/hostname
	I0601 04:26:02.186984   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601042455-2342
	
	I0601 04:26:02.187091   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.266807   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.267070   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.267086   14580 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601042455-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601042455-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601042455-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:26:02.391579   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:26:02.391598   14580 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:26:02.391622   14580 ubuntu.go:177] setting up certificates
	I0601 04:26:02.391631   14580 provision.go:83] configureAuth start
	I0601 04:26:02.391694   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.468272   14580 provision.go:138] copyHostCerts
	I0601 04:26:02.468364   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:26:02.468374   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:26:02.468462   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:26:02.468675   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:26:02.468685   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:26:02.468744   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:26:02.468926   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:26:02.468932   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:26:02.468992   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:26:02.469123   14580 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601042455-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601042455-2342]
	I0601 04:26:02.628033   14580 provision.go:172] copyRemoteCerts
	I0601 04:26:02.628108   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:26:02.628154   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.702757   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:02.788602   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 04:26:02.808975   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:26:02.827761   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 04:26:02.845233   14580 provision.go:86] duration metric: configureAuth took 453.583438ms
	I0601 04:26:02.845253   14580 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:26:02.845415   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:26:02.845498   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.918198   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.918337   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.918348   14580 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:26:03.037204   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:26:03.037224   14580 ubuntu.go:71] root file system type: overlay
	I0601 04:26:03.037352   14580 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:26:03.037443   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.111170   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:03.111313   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:03.111366   14580 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:26:03.240246   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:26:03.240328   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.313142   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:03.313309   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:03.313322   14580 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:26:03.436245   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:26:03.436261   14580 machine.go:91] provisioned docker machine in 1.457588188s
	I0601 04:26:03.436271   14580 start.go:306] post-start starting for "default-k8s-different-port-20220601042455-2342" (driver="docker")
	I0601 04:26:03.436277   14580 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:26:03.436331   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:26:03.436382   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.508767   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.596983   14580 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:26:03.600488   14580 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:26:03.600504   14580 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:26:03.600511   14580 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:26:03.600516   14580 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:26:03.600524   14580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:26:03.600622   14580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:26:03.600753   14580 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:26:03.600906   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:26:03.607953   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:26:03.625477   14580 start.go:309] post-start completed in 189.194706ms
	I0601 04:26:03.625551   14580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:26:03.625594   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.698770   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.782184   14580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:26:03.787963   14580 fix.go:57] fixHost completed within 2.49516055s
	I0601 04:26:03.787975   14580 start.go:81] releasing machines lock for "default-k8s-different-port-20220601042455-2342", held for 2.495210006s
	I0601 04:26:03.788048   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.861755   14580 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:26:03.861770   14580 ssh_runner.go:195] Run: systemctl --version
	I0601 04:26:03.861826   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.861844   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.941385   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.943609   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:04.159115   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:26:04.170967   14580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:26:04.180530   14580 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:26:04.180584   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:26:04.190099   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:26:04.203627   14580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:26:04.276680   14580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:26:04.346367   14580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:26:04.356550   14580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:26:04.429345   14580 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:26:04.438999   14580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:26:04.474032   14580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:26:04.561227   14580 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:26:04.561347   14580 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220601042455-2342 dig +short host.docker.internal
	I0601 04:26:04.700636   14580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:26:04.700876   14580 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:26:04.705714   14580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:26:04.715234   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:04.787381   14580 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:26:04.787444   14580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:26:04.820611   14580 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:26:04.820628   14580 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:26:04.820702   14580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:26:04.851433   14580 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:26:04.851452   14580 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:26:04.851529   14580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:26:04.924277   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:04.924289   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:04.924340   14580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:26:04.924355   14580 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601042455-2342 NodeName:default-k8s-different-port-20220601042455-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:26:04.924465   14580 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220601042455-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:26:04.924586   14580 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220601042455-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 04:26:04.924655   14580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:26:04.933113   14580 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:26:04.933164   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:26:04.939996   14580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0601 04:26:04.953811   14580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:26:04.966780   14580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0601 04:26:04.979301   14580 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:26:04.983155   14580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:26:04.993190   14580 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342 for IP: 192.168.49.2
	I0601 04:26:04.993351   14580 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:26:04.993405   14580 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:26:04.994010   14580 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.key
	I0601 04:26:04.994228   14580 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.key.dd3b5fb2
	I0601 04:26:04.994339   14580 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.key
	I0601 04:26:04.994792   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:26:04.994838   14580 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:26:04.994852   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:26:04.994897   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:26:04.994933   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:26:04.994966   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:26:04.995036   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:26:04.995574   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:26:05.012976   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 04:26:05.029562   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:26:05.046161   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 04:26:05.064012   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:26:05.081128   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:26:05.098110   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:26:05.116584   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:26:05.134436   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:26:05.152377   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:26:05.170599   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:26:05.187918   14580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:26:05.201055   14580 ssh_runner.go:195] Run: openssl version
	I0601 04:26:05.206392   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:26:05.214051   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.217767   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.217818   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.222918   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:26:05.230623   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:26:05.238341   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.242884   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.242932   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.248585   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:26:05.257319   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:26:05.266343   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.270643   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.270700   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.276959   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:26:05.288008   14580 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:05.288113   14580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:26:05.322070   14580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:26:05.329648   14580 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:26:05.329669   14580 kubeadm.go:626] restartCluster start
	I0601 04:26:05.329722   14580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:26:05.336276   14580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.336354   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:05.410961   14580 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601042455-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:26:05.411150   14580 kubeconfig.go:127] "default-k8s-different-port-20220601042455-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:26:05.411483   14580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:26:05.412859   14580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:26:05.420833   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.420896   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.429500   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.631665   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.631833   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.643018   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.831688   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.831898   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.843153   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.030469   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.030567   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.040665   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.231798   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.231890   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.243084   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.431676   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.431826   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.443194   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.631700   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.631935   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.642686   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.830012   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.830094   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.839242   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.029660   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.029824   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.040634   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.231729   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.231868   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.241891   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.431635   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.431790   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.442906   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.631739   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.631876   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.642010   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.831673   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.831875   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.842287   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.031773   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.031867   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.042244   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.230107   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.230278   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.240940   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.431826   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.431938   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.442260   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.442271   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.442320   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.450312   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.450323   14580 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:26:08.450330   14580 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:26:08.450388   14580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:26:08.480503   14580 docker.go:442] Stopping containers: [65d7be1a2882 048b1bdbb6c2 2c25ac3039ad 125e0a096cf4 ab5ecc73c373 2c18d790047c 929c1f424661 dabba0ff7c28 796713528a3d 545f113ce692 86e7f6f4c99d ee398f9c81ed a9ae0036438b f295a496a4ff 35bded318b85]
	I0601 04:26:08.480580   14580 ssh_runner.go:195] Run: docker stop 65d7be1a2882 048b1bdbb6c2 2c25ac3039ad 125e0a096cf4 ab5ecc73c373 2c18d790047c 929c1f424661 dabba0ff7c28 796713528a3d 545f113ce692 86e7f6f4c99d ee398f9c81ed a9ae0036438b f295a496a4ff 35bded318b85
	I0601 04:26:08.511573   14580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:26:08.521553   14580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:26:08.529129   14580 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  1 11:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:25 /etc/kubernetes/scheduler.conf
	
	I0601 04:26:08.529185   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 04:26:08.536247   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 04:26:08.543227   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 04:26:08.550190   14580 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.550240   14580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:26:08.556997   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 04:26:08.563897   14580 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.563944   14580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:26:08.570777   14580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:26:08.578228   14580 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:26:08.578236   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:08.622444   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.237802   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.364391   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.414114   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.459937   14580 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:26:09.459999   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:09.970965   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:10.470355   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:10.972317   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:11.018801   14580 api_server.go:71] duration metric: took 1.55884453s to wait for apiserver process to appear ...
	I0601 04:26:11.018822   14580 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:26:11.018837   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:13.577321   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:26:13.577342   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:26:14.079514   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:14.087324   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:26:14.087344   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:26:14.577516   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:14.584143   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:26:14.584160   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:26:15.078134   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:15.084601   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 200:
	ok
	I0601 04:26:15.090735   14580 api_server.go:140] control plane version: v1.23.6
	I0601 04:26:15.090746   14580 api_server.go:130] duration metric: took 4.071866633s to wait for apiserver health ...
	I0601 04:26:15.090751   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:15.090756   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:15.090765   14580 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:26:15.097741   14580 system_pods.go:59] 8 kube-system pods found
	I0601 04:26:15.097757   14580 system_pods.go:61] "coredns-64897985d-2cwbz" [f2ee505c-7abb-468c-b82f-0639d95d3f54] Running
	I0601 04:26:15.097764   14580 system_pods.go:61] "etcd-default-k8s-different-port-20220601042455-2342" [b259b886-9d8d-48c7-aa2a-65478e01fab5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:26:15.097771   14580 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [34bbd902-3352-4e4b-b54d-d825aa11c98a] Running
	I0601 04:26:15.097777   14580 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [efd80c45-ac3d-4e6f-81fd-e7bb51b9cffa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:26:15.097781   14580 system_pods.go:61] "kube-proxy-5psvf" [3d2253f1-8b8f-4db0-8081-ca96df760f01] Running
	I0601 04:26:15.097787   14580 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [18d03a0a-c279-4519-aff4-0601818b2b0f] Running
	I0601 04:26:15.097792   14580 system_pods.go:61] "metrics-server-b955d9d8-cb68n" [7969f4c9-b7b6-4268-bbeb-e853689361f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:26:15.097796   14580 system_pods.go:61] "storage-provisioner" [0da4c653-9101-4891-85e8-a014384c87d8] Running
	I0601 04:26:15.097800   14580 system_pods.go:74] duration metric: took 7.031251ms to wait for pod list to return data ...
	I0601 04:26:15.097806   14580 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:26:15.100523   14580 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:26:15.100537   14580 node_conditions.go:123] node cpu capacity is 6
	I0601 04:26:15.100549   14580 node_conditions.go:105] duration metric: took 2.73238ms to run NodePressure ...
	I0601 04:26:15.100560   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:15.225479   14580 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 04:26:15.230353   14580 kubeadm.go:777] kubelet initialised
	I0601 04:26:15.230363   14580 kubeadm.go:778] duration metric: took 4.871582ms waiting for restarted kubelet to initialise ...
	I0601 04:26:15.230371   14580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:26:15.235739   14580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-2cwbz" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:15.240544   14580 pod_ready.go:92] pod "coredns-64897985d-2cwbz" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:15.240553   14580 pod_ready.go:81] duration metric: took 4.800313ms waiting for pod "coredns-64897985d-2cwbz" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:15.240559   14580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:17.252022   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:19.252400   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:21.252507   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:23.752885   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:25.754820   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:27.752927   14580 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.752939   14580 pod_ready.go:81] duration metric: took 12.512215332s waiting for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.752945   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.758428   14580 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.758437   14580 pod_ready.go:81] duration metric: took 5.478741ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.758444   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.763037   14580 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.763046   14580 pod_ready.go:81] duration metric: took 4.596913ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.763053   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5psvf" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.767548   14580 pod_ready.go:92] pod "kube-proxy-5psvf" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.767557   14580 pod_ready.go:81] duration metric: took 4.499795ms waiting for pod "kube-proxy-5psvf" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.767564   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.771963   14580 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.771972   14580 pod_ready.go:81] duration metric: took 4.403205ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.771978   14580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:30.160100   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:32.659198   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:35.158334   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:37.159528   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:39.160149   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:41.659068   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:44.157795   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:46.658518   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:49.159038   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:51.658963   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:54.158069   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:56.158969   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:58.659942   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:00.660463   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:03.160184   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:05.659156   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:08.160717   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:10.660116   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:12.660625   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:15.160199   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:17.162541   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:19.658919   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:21.660968   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:23.661128   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:26.160934   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:28.659300   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:30.659502   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:32.660480   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:35.156691   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:37.157033   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:39.157790   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:41.659953   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:44.157489   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:46.158320   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:48.158876   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:50.159391   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:52.160809   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:54.657873   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:56.658844   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:58.660860   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:01.160686   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:03.658576   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:05.660625   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:08.158369   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:10.159184   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:12.657760   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:14.659544   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:17.158299   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:19.159216   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:21.159644   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:23.659877   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:26.159865   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:28.161266   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:30.658249   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:32.659490   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:35.158008   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:37.160518   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:39.161152   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:41.660719   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:44.157806   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:46.159192   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:48.160558   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:50.661861   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:53.158863   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:55.159591   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:57.160005   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:59.660242   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:02.159089   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:04.163195   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:06.658567   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:08.661737   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:11.160153   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:13.659262   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:16.160500   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:18.659465   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:20.660803   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:22.661147   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:25.160932   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:27.659142   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:29.661942   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:32.158831   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:34.160363   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:36.162066   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:38.660230   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:40.660953   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:43.161689   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:45.660306   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:47.662797   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:50.161558   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:52.661866   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:55.162273   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:57.162318   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:59.663120   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:02.160469   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:04.161108   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:06.161957   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:08.662446   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:11.159732   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:13.161296   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:15.162016   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:17.663004   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:20.160275   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:22.162820   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:24.659704   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:26.659992   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:28.154244   14580 pod_ready.go:81] duration metric: took 4m0.379163703s waiting for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" ...
	E0601 04:30:28.154270   14580 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 04:30:28.154377   14580 pod_ready.go:38] duration metric: took 4m12.920745187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:30:28.154419   14580 kubeadm.go:630] restartCluster took 4m22.821363871s
	W0601 04:30:28.154538   14580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 04:30:28.154568   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:31:06.489649   14580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.334570745s)
	I0601 04:31:06.489708   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:06.500019   14580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:31:06.508704   14580 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:31:06.508749   14580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:31:06.516354   14580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:31:06.516381   14580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:31:07.022688   14580 out.go:204]   - Generating certificates and keys ...
	I0601 04:31:07.547628   14580 out.go:204]   - Booting up control plane ...
	I0601 04:31:14.098649   14580 out.go:204]   - Configuring RBAC rules ...
	I0601 04:31:14.472898   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:31:14.472939   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:31:14.472970   14580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:31:14.473040   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601042455-2342 minikube.k8s.io/updated_at=2022_06_01T04_31_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.473054   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.609357   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.629440   14580 ops.go:34] apiserver oom_adj: -16
	I0601 04:31:15.302635   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:15.802069   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:16.301733   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:16.802238   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:17.302310   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:17.801852   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:18.301850   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:18.801983   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:19.301780   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:19.802161   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:20.301791   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:20.801992   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:21.302891   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:21.803055   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:22.301886   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:22.802271   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:23.303324   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:23.801846   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:24.302533   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:24.802000   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:25.302196   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:25.801882   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:26.302056   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:26.801937   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:27.301872   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:27.356443   14580 kubeadm.go:1045] duration metric: took 12.883292321s to wait for elevateKubeSystemPrivileges.
	I0601 04:31:27.356475   14580 kubeadm.go:397] StartCluster complete in 5m22.064331829s
	I0601 04:31:27.356499   14580 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:31:27.356584   14580 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:31:27.357174   14580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:31:27.873007   14580 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601042455-2342" rescaled to 1
	I0601 04:31:27.873045   14580 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:31:27.873075   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:31:27.916883   14580 out.go:177] * Verifying Kubernetes components...
	I0601 04:31:27.873103   14580 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:31:27.873265   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:31:27.990179   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:27.990194   14580 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990191   14580 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990230   14580 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990256   14580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990270   14580 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990292   14580 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990297   14580 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601042455-2342"
	W0601 04:31:27.990313   14580 addons.go:165] addon storage-provisioner should already be in state true
	W0601 04:31:27.990319   14580 addons.go:165] addon dashboard should already be in state true
	I0601 04:31:27.990289   14580 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601042455-2342"
	W0601 04:31:27.990347   14580 addons.go:165] addon metrics-server should already be in state true
	I0601 04:31:27.990402   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.990406   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.990546   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.991110   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991161   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991193   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991205   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:28.005334   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:31:28.019407   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.119407   14580 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:28.160009   14580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0601 04:31:28.160022   14580 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:31:28.139235   14580 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:31:28.160061   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:28.181420   14580 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:31:28.182270   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:28.222903   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:31:28.202022   14580 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 04:31:28.213807   14580 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601042455-2342" to be "Ready" ...
	I0601 04:31:28.222906   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:31:28.222991   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.244155   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:31:28.244317   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.265065   14580 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:31:28.248658   14580 node_ready.go:49] node "default-k8s-different-port-20220601042455-2342" has status "Ready":"True"
	I0601 04:31:28.286018   14580 node_ready.go:38] duration metric: took 41.900602ms waiting for node "default-k8s-different-port-20220601042455-2342" to be "Ready" ...
	I0601 04:31:28.286061   14580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:31:28.286111   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:31:28.286143   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:31:28.286305   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.301339   14580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-8p4v4" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:28.323212   14580 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:31:28.323230   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:31:28.323329   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.353482   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.378157   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.401554   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.425239   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.501564   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:31:28.598323   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:31:28.598337   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:31:28.606270   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:31:28.608473   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:31:28.608492   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:31:28.690725   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:31:28.690745   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:31:28.702169   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:31:28.702196   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:31:28.793453   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:31:28.793479   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:31:28.799106   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:31:28.799130   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:31:28.885461   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:31:28.897510   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:31:28.897524   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:31:28.918188   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:31:28.918205   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:31:29.002904   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:31:29.002919   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:31:29.117220   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:31:29.117235   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:31:29.201285   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:31:29.201302   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:31:29.287703   14580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.282311049s)
	I0601 04:31:29.287729   14580 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 04:31:29.291304   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:31:29.291322   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:31:29.392031   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:31:29.727831   14580 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:29.886057   14580 pod_ready.go:92] pod "coredns-64897985d-8p4v4" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:29.886074   14580 pod_ready.go:81] duration metric: took 1.584692102s waiting for pod "coredns-64897985d-8p4v4" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:29.886087   14580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-cb9n8" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:30.725108   14580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.333022241s)
	I0601 04:31:30.806402   14580 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 04:31:30.843522   14580 addons.go:417] enableAddons completed in 2.970381543s
	I0601 04:31:31.905757   14580 pod_ready.go:102] pod "coredns-64897985d-cb9n8" in "kube-system" namespace has status "Ready":"False"
	I0601 04:31:32.905434   14580 pod_ready.go:92] pod "coredns-64897985d-cb9n8" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.905450   14580 pod_ready.go:81] duration metric: took 3.019316909s waiting for pod "coredns-64897985d-cb9n8" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.905457   14580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.914545   14580 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.914568   14580 pod_ready.go:81] duration metric: took 9.084073ms waiting for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.914583   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.926766   14580 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.926777   14580 pod_ready.go:81] duration metric: took 12.185589ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.926785   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.936236   14580 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.936249   14580 pod_ready.go:81] duration metric: took 9.458235ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.936261   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p7tsj" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.982358   14580 pod_ready.go:92] pod "kube-proxy-p7tsj" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.982376   14580 pod_ready.go:81] duration metric: took 46.107821ms waiting for pod "kube-proxy-p7tsj" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.982388   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:33.300851   14580 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:33.300861   14580 pod_ready.go:81] duration metric: took 318.462177ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:33.300867   14580 pod_ready.go:38] duration metric: took 5.014691974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:31:33.300883   14580 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:31:33.300930   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:31:33.312670   14580 api_server.go:71] duration metric: took 5.439538065s to wait for apiserver process to appear ...
	I0601 04:31:33.312684   14580 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:31:33.312690   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:31:33.318481   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 200:
	ok
	I0601 04:31:33.319652   14580 api_server.go:140] control plane version: v1.23.6
	I0601 04:31:33.319662   14580 api_server.go:130] duration metric: took 6.974325ms to wait for apiserver health ...
	I0601 04:31:33.319668   14580 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:31:33.503644   14580 system_pods.go:59] 9 kube-system pods found
	I0601 04:31:33.503658   14580 system_pods.go:61] "coredns-64897985d-8p4v4" [ae0cb737-4e73-40a0-b7ca-c5fb35908ad9] Running
	I0601 04:31:33.503664   14580 system_pods.go:61] "coredns-64897985d-cb9n8" [0b71bc2a-d0ac-4d4d-9420-1422f088b267] Running
	I0601 04:31:33.503672   14580 system_pods.go:61] "etcd-default-k8s-different-port-20220601042455-2342" [d64e3142-a5a3-438a-b1dd-f8fda41cf500] Running
	I0601 04:31:33.503684   14580 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [e7ebee32-6122-4fd0-8e7a-26d16cf09fd5] Running
	I0601 04:31:33.503691   14580 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [736247dc-e330-4d49-a9b4-38e9f4bf2f55] Running
	I0601 04:31:33.503697   14580 system_pods.go:61] "kube-proxy-p7tsj" [4a00e2b2-3357-4d45-812e-b96583883072] Running
	I0601 04:31:33.503708   14580 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [547e2d90-4aa4-4ffa-8227-7a87069bc624] Running
	I0601 04:31:33.503718   14580 system_pods.go:61] "metrics-server-b955d9d8-vqpwl" [53aca426-4c43-4abd-bbb9-ca59d11ca961] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:31:33.503726   14580 system_pods.go:61] "storage-provisioner" [eb46d9b1-266a-406d-bfa9-384a28696367] Running
	I0601 04:31:33.503737   14580 system_pods.go:74] duration metric: took 184.060787ms to wait for pod list to return data ...
	I0601 04:31:33.503746   14580 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:31:33.700368   14580 default_sa.go:45] found service account: "default"
	I0601 04:31:33.700381   14580 default_sa.go:55] duration metric: took 196.626716ms for default service account to be created ...
	I0601 04:31:33.700386   14580 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:31:33.904017   14580 system_pods.go:86] 9 kube-system pods found
	I0601 04:31:33.904032   14580 system_pods.go:89] "coredns-64897985d-8p4v4" [ae0cb737-4e73-40a0-b7ca-c5fb35908ad9] Running
	I0601 04:31:33.904036   14580 system_pods.go:89] "coredns-64897985d-cb9n8" [0b71bc2a-d0ac-4d4d-9420-1422f088b267] Running
	I0601 04:31:33.904040   14580 system_pods.go:89] "etcd-default-k8s-different-port-20220601042455-2342" [d64e3142-a5a3-438a-b1dd-f8fda41cf500] Running
	I0601 04:31:33.904050   14580 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [e7ebee32-6122-4fd0-8e7a-26d16cf09fd5] Running
	I0601 04:31:33.904056   14580 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [736247dc-e330-4d49-a9b4-38e9f4bf2f55] Running
	I0601 04:31:33.904060   14580 system_pods.go:89] "kube-proxy-p7tsj" [4a00e2b2-3357-4d45-812e-b96583883072] Running
	I0601 04:31:33.904064   14580 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [547e2d90-4aa4-4ffa-8227-7a87069bc624] Running
	I0601 04:31:33.904069   14580 system_pods.go:89] "metrics-server-b955d9d8-vqpwl" [53aca426-4c43-4abd-bbb9-ca59d11ca961] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:31:33.904073   14580 system_pods.go:89] "storage-provisioner" [eb46d9b1-266a-406d-bfa9-384a28696367] Running
	I0601 04:31:33.904079   14580 system_pods.go:126] duration metric: took 203.685319ms to wait for k8s-apps to be running ...
	I0601 04:31:33.904101   14580 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:31:33.904156   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:33.916392   14580 system_svc.go:56] duration metric: took 12.281443ms WaitForService to wait for kubelet.
	I0601 04:31:33.916408   14580 kubeadm.go:572] duration metric: took 6.043269745s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:31:33.916426   14580 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:31:34.101016   14580 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:31:34.101029   14580 node_conditions.go:123] node cpu capacity is 6
	I0601 04:31:34.101041   14580 node_conditions.go:105] duration metric: took 184.609149ms to run NodePressure ...
	I0601 04:31:34.101051   14580 start.go:213] waiting for startup goroutines ...
	I0601 04:31:34.134136   14580 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:31:34.156421   14580 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220601042455-2342" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:14:29 UTC, end at Wed 2022-06-01 11:32:13 UTC. --
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.661521825Z" level=info msg="Starting up"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663342504Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663395200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663411000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663419036Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664701040Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664730081Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664742618Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664754909Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.669344312Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.673789964Z" level=info msg="Loading containers: start."
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.759102419Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.791878604Z" level=info msg="Loading containers: done."
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.800298543Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.800366770Z" level=info msg="Daemon has completed initialization"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 systemd[1]: Started Docker Application Container Engine.
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.826706081Z" level=info msg="API listen on [::]:2376"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.829430983Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-01T11:32:15Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  11:32:15 up  1:12,  0 users,  load average: 0.76, 0.55, 0.71
	Linux old-k8s-version-20220601040844-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:14:29 UTC, end at Wed 2022-06-01 11:32:16 UTC. --
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 kubelet[24508]: I0601 11:32:14.827080   24508 server.go:410] Version: v1.16.0
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 kubelet[24508]: I0601 11:32:14.827724   24508 plugins.go:100] No cloud provider specified.
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 kubelet[24508]: I0601 11:32:14.827758   24508 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 kubelet[24508]: I0601 11:32:14.829545   24508 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 kubelet[24508]: W0601 11:32:14.830196   24508 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 kubelet[24508]: W0601 11:32:14.830260   24508 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 kubelet[24508]: F0601 11:32:14.830284   24508 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 11:32:14 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 kubelet[24520]: I0601 11:32:15.550225   24520 server.go:410] Version: v1.16.0
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 kubelet[24520]: I0601 11:32:15.550574   24520 plugins.go:100] No cloud provider specified.
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 kubelet[24520]: I0601 11:32:15.550634   24520 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 kubelet[24520]: I0601 11:32:15.552353   24520 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 kubelet[24520]: W0601 11:32:15.553063   24520 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 kubelet[24520]: W0601 11:32:15.553230   24520 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 kubelet[24520]: F0601 11:32:15.553308   24520 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 11:32:15 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0601 04:32:15.765330   14824 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (459.609155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220601040844-2342" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.18s)

TestStartStop/group/no-preload/serial/Pause (44.05s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220601041659-2342 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342: exit status 2 (16.12401665s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342: exit status 2 (16.11034412s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220601041659-2342 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-darwin-amd64 unpause -p no-preload-20220601041659-2342 --alsologtostderr -v=1: (1.156093218s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601041659-2342
helpers_test.go:235: (dbg) docker inspect no-preload-20220601041659-2342:

-- stdout --
	[
	    {
	        "Id": "3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45",
	        "Created": "2022-06-01T11:17:01.899467855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:18:16.054818263Z",
	            "FinishedAt": "2022-06-01T11:18:14.113950323Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/hosts",
	        "LogPath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45-json.log",
	        "Name": "/no-preload-20220601041659-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220601041659-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220601041659-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220601041659-2342",
	                "Source": "/var/lib/docker/volumes/no-preload-20220601041659-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220601041659-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220601041659-2342",
	                "name.minikube.sigs.k8s.io": "no-preload-20220601041659-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79429c5e2d9174bd85f397e0c1392083863399f885d3240ae7ca6278b00efb76",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53165"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53166"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53162"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/79429c5e2d91",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220601041659-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a0a7f863fca",
	                        "no-preload-20220601041659-2342"
	                    ],
	                    "NetworkID": "a99de4b0c2de36bb282e54aada7d2e4017796d0c8751cdbcbb3a530355a143b4",
	                    "EndpointID": "a5d53691e7b0ad947beeea84b395d671ed663b5e3f0c49ea7480e20c2ecac4c2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220601041659-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220601041659-2342 logs -n 25: (2.75565324s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p kubenet-20220601035306-2342                    | kubenet-20220601035306-2342               | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	| delete  | -p                                                | disable-driver-mounts-20220601040914-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | disable-driver-mounts-20220601040914-2342         |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:15 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:17 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342               | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:22 PDT | 01 Jun 22 04:22 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:23 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:18:14
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:18:14.774878   14036 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:18:14.775106   14036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:18:14.775111   14036 out.go:309] Setting ErrFile to fd 2...
	I0601 04:18:14.775115   14036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:18:14.775218   14036 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:18:14.775473   14036 out.go:303] Setting JSON to false
	I0601 04:18:14.790201   14036 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4664,"bootTime":1654077630,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:18:14.790325   14036 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:18:14.812835   14036 out.go:177] * [no-preload-20220601041659-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:18:14.855264   14036 notify.go:193] Checking for updates...
	I0601 04:18:14.877395   14036 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:18:14.899376   14036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:18:14.921165   14036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:18:14.942588   14036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:18:14.964495   14036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:18:14.986858   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:18:14.987481   14036 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:18:15.060132   14036 docker.go:137] docker version: linux-20.10.14
	I0601 04:18:15.060314   14036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:18:15.195804   14036 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:18:15.147757293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:18:15.239675   14036 out.go:177] * Using the docker driver based on existing profile
	I0601 04:18:15.261306   14036 start.go:284] selected driver: docker
	I0601 04:18:15.261319   14036 start.go:806] validating driver "docker" against &{Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Scheduled
Stop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:15.261387   14036 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:18:15.263519   14036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:18:15.389107   14036 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:18:15.340671415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:18:15.389294   14036 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:18:15.389312   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:15.389320   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:15.389327   14036 start_flags.go:306] config:
	{Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:fa
lse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:15.411303   14036 out.go:177] * Starting control plane node no-preload-20220601041659-2342 in cluster no-preload-20220601041659-2342
	I0601 04:18:15.432174   14036 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:18:15.453963   14036 out.go:177] * Pulling base image ...
	I0601 04:18:15.496035   14036 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:18:15.496046   14036 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:18:15.496172   14036 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/config.json ...
	I0601 04:18:15.496274   14036 cache.go:107] acquiring lock: {Name:mk3e9a6bf873842d2e5ca428e419405f67698986 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.496248   14036 cache.go:107] acquiring lock: {Name:mk6cdcb3277425415932624173a7b7ca3460ec43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497016   14036 cache.go:107] acquiring lock: {Name:mk5aea169468c70908c7500bcfea18f2c75c6bec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497323   14036 cache.go:107] acquiring lock: {Name:mk0ce8763eede5207a594beee88851a0e339bc7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497362   14036 cache.go:107] acquiring lock: {Name:mk735d5a3617189a069af22bcee4c9a1653c60c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497430   14036 cache.go:107] acquiring lock: {Name:mkbce65c6aa4c06171eeb95b8350c15ff2252191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497532   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0601 04:18:15.497461   14036 cache.go:107] acquiring lock: {Name:mkc7860c5e3d5dd07d6a0cd1126cb14b20ddb5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497556   14036 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 1.295562ms
	I0601 04:18:15.497574   14036 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0601 04:18:15.497985   14036 cache.go:107] acquiring lock: {Name:mk2917ee5d109fb25f09b3f463d8b7c0891736eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497986   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 exists
	I0601 04:18:15.498016   14036 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6" took 1.771839ms
	I0601 04:18:15.498038   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0601 04:18:15.498038   14036 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 succeeded
	I0601 04:18:15.498057   14036 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.82584ms
	I0601 04:18:15.498071   14036 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0601 04:18:15.498155   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 exists
	I0601 04:18:15.498156   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 exists
	I0601 04:18:15.498158   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0601 04:18:15.498169   14036 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6" took 1.113538ms
	I0601 04:18:15.498179   14036 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 succeeded
	I0601 04:18:15.498177   14036 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6" took 1.19488ms
	I0601 04:18:15.498188   14036 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 succeeded
	I0601 04:18:15.498180   14036 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 1.089655ms
	I0601 04:18:15.498208   14036 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0601 04:18:15.498227   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 exists
	I0601 04:18:15.498223   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0601 04:18:15.498233   14036 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6" took 896.28µs
	I0601 04:18:15.498240   14036 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 1.200632ms
	I0601 04:18:15.498243   14036 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 succeeded
	I0601 04:18:15.498248   14036 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0601 04:18:15.498261   14036 cache.go:87] Successfully saved all images to host disk.
	I0601 04:18:15.562486   14036 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:18:15.562503   14036 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:18:15.562514   14036 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:18:15.562561   14036 start.go:352] acquiring machines lock for no-preload-20220601041659-2342: {Name:mk58caff34cdda9e203618eaf8e1336a225589ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.562634   14036 start.go:356] acquired machines lock for "no-preload-20220601041659-2342" in 62.594µs
	I0601 04:18:15.562660   14036 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:18:15.562670   14036 fix.go:55] fixHost starting: 
	I0601 04:18:15.562891   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:18:15.632241   14036 fix.go:103] recreateIfNeeded on no-preload-20220601041659-2342: state=Stopped err=<nil>
	W0601 04:18:15.632274   14036 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:18:15.654231   14036 out.go:177] * Restarting existing docker container for "no-preload-20220601041659-2342" ...
	I0601 04:18:15.403723   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:15.481185   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:15.512966   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.512977   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:15.513037   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:15.544508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.544521   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:15.544567   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:15.581483   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.581494   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:15.581555   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:15.613508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.613522   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:15.613578   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:15.645122   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.645148   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:15.645206   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:15.675331   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.675344   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:15.675397   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:15.706115   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.706144   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:15.706233   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:15.738582   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.738596   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:15.738604   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:15.738612   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:15.803326   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:15.803338   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:15.803345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:15.821038   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:15.821061   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:17.883284   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06218452s)
	I0601 04:18:17.883398   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:17.883406   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:17.927628   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:17.927643   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
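The block above is minikube's standard diagnostics sweep for an unreachable apiserver: kubelet and Docker journals, dmesg, and a crictl/docker container listing. A sketch of gathering the same data by hand (the profile name is a placeholder for whichever cluster process 13556 is probing):

    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    minikube -p <profile> ssh -- sudo journalctl -u docker -n 400
    minikube -p <profile> ssh -- sudo dmesg --level warn,err,crit,alert,emerg
    # or collect everything in one bundle
    minikube -p <profile> logs --file=logs.txt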
	I0601 04:18:15.696056   14036 cli_runner.go:164] Run: docker start no-preload-20220601041659-2342
	I0601 04:18:16.064616   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:18:16.139380   14036 kic.go:416] container "no-preload-20220601041659-2342" state is running.
	I0601 04:18:16.140133   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:16.223932   14036 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/config.json ...
	I0601 04:18:16.224334   14036 machine.go:88] provisioning docker machine ...
	I0601 04:18:16.224356   14036 ubuntu.go:169] provisioning hostname "no-preload-20220601041659-2342"
	I0601 04:18:16.224451   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.304277   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:16.304462   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:16.304479   14036 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220601041659-2342 && echo "no-preload-20220601041659-2342" | sudo tee /etc/hostname
	I0601 04:18:16.433244   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220601041659-2342
	
	I0601 04:18:16.433319   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.508502   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:16.508694   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:16.508709   14036 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220601041659-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220601041659-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220601041659-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:18:16.629821   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:18:16.629881   14036 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:18:16.629920   14036 ubuntu.go:177] setting up certificates
	I0601 04:18:16.629931   14036 provision.go:83] configureAuth start
	I0601 04:18:16.630007   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:16.783851   14036 provision.go:138] copyHostCerts
	I0601 04:18:16.783934   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:18:16.783942   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:18:16.784048   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:18:16.784282   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:18:16.784290   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:18:16.784348   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:18:16.784513   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:18:16.784521   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:18:16.784583   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:18:16.784734   14036 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220601041659-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220601041659-2342]
	I0601 04:18:16.853835   14036 provision.go:172] copyRemoteCerts
	I0601 04:18:16.853899   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:18:16.853944   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.930312   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:17.016327   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 04:18:17.035298   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:18:17.053766   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:18:17.073791   14036 provision.go:86] duration metric: configureAuth took 443.84114ms
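configureAuth regenerates a server certificate signed by the minikube CA and copies ca.pem, server.pem, and server-key.pem into /etc/docker on the node so dockerd can run with --tlsverify. A quick sanity check (sketch; paths assume the default ~/.minikube layout, whereas this run uses the Jenkins-specific MINIKUBE_HOME shown above):

    # the regenerated server cert should chain to the minikube CA
    openssl verify -CAfile ~/.minikube/certs/ca.pem ~/.minikube/machines/server.pem
    # and the three files should be present inside the node
    minikube -p no-preload-20220601041659-2342 ssh -- ls -l /etc/docker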
	I0601 04:18:17.073803   14036 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:18:17.073938   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:18:17.073997   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.145242   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.145411   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.145424   14036 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:18:17.264247   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:18:17.264260   14036 ubuntu.go:71] root file system type: overlay
	I0601 04:18:17.264374   14036 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:18:17.264440   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.337947   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.338110   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.338170   14036 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:18:17.464122   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:18:17.464206   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.535287   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.535439   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.535452   14036 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:18:17.656716   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:18:17.656732   14036 machine.go:91] provisioned docker machine in 1.432374179s
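The unit file is first written to docker.service.new and only swapped in (with daemon-reload, enable, restart) when it differs from the current unit, which keeps the restart a no-op on repeated provisioning. The effective result can be inspected afterwards, e.g.:

    # the unit systemd actually loaded, and the daemon setting minikube checks a few steps later
    minikube -p no-preload-20220601041659-2342 ssh -- sudo systemctl cat docker.service
    minikube -p no-preload-20220601041659-2342 ssh -- docker info --format '{{.CgroupDriver}}'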
	I0601 04:18:17.656739   14036 start.go:306] post-start starting for "no-preload-20220601041659-2342" (driver="docker")
	I0601 04:18:17.656743   14036 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:18:17.656811   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:18:17.656865   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.727465   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:17.821284   14036 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:18:17.825772   14036 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:18:17.825789   14036 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:18:17.825804   14036 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:18:17.825812   14036 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:18:17.825821   14036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:18:17.825928   14036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:18:17.826061   14036 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:18:17.826225   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:18:17.833508   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:18:17.850697   14036 start.go:309] post-start completed in 193.939485ms
	I0601 04:18:17.850781   14036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:18:17.850824   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.926433   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.009814   14036 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:18:18.014092   14036 fix.go:57] fixHost completed within 2.451394081s
	I0601 04:18:18.014102   14036 start.go:81] releasing machines lock for "no-preload-20220601041659-2342", held for 2.451434151s
	I0601 04:18:18.014172   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:18.086484   14036 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:18:18.086486   14036 ssh_runner.go:195] Run: systemctl --version
	I0601 04:18:18.086599   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.086599   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.167367   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.170314   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.252773   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:18:18.385785   14036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:18:18.396076   14036 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:18:18.396130   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:18:18.405470   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:18:18.418490   14036 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:18:18.486961   14036 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:18:18.553762   14036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:18:18.563812   14036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:18:18.628982   14036 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:18:18.638220   14036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:18:18.674484   14036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:18:18.753443   14036 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:18:18.753615   14036 cli_runner.go:164] Run: docker exec -t no-preload-20220601041659-2342 dig +short host.docker.internal
	I0601 04:18:18.881761   14036 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:18:18.881851   14036 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:18:18.886028   14036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:18:18.895724   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.966314   14036 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:18:18.966372   14036 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:18:18.998564   14036 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:18:18.998580   14036 cache_images.go:84] Images are preloaded, skipping loading
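Because this is the no-preload profile, there is no preload tarball to extract; minikube instead checks that every control-plane image for v1.23.6 already exists in the node's Docker daemon. The same check by hand (sketch):

    # the kube-apiserver/controller-manager/scheduler/proxy, etcd, CoreDNS and pause images must all be listed
    minikube -p no-preload-20220601041659-2342 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'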
	I0601 04:18:18.998654   14036 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:18:19.075561   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:19.075576   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:19.075610   14036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:18:19.075634   14036 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220601041659-2342 NodeName:no-preload-20220601041659-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:18:19.075750   14036 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "no-preload-20220601041659-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:18:19.075828   14036 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20220601041659-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
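The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what get written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Once the cluster is back up, the cluster-scoped parts can be compared against what kubeadm stores in-cluster (sketch; the kubelet ConfigMap name is versioned for v1.23 and may differ on other releases):

    kubectl -n kube-system get configmap kubeadm-config -o yaml
    kubectl -n kube-system get configmap kubelet-config-1.23 -o yaml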
	I0601 04:18:19.075885   14036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:18:19.083584   14036 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:18:19.083642   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:18:19.090501   14036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0601 04:18:19.102643   14036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:18:19.114867   14036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
	I0601 04:18:19.127704   14036 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:18:19.131229   14036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:18:19.140635   14036 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342 for IP: 192.168.49.2
	I0601 04:18:19.140743   14036 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:18:19.140794   14036 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:18:19.140880   14036 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.key
	I0601 04:18:19.140951   14036 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.key.dd3b5fb2
	I0601 04:18:19.141000   14036 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.key
	I0601 04:18:19.141188   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:18:19.141229   14036 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:18:19.141241   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:18:19.141271   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:18:19.141304   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:18:19.141334   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:18:19.141394   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:18:19.141961   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:18:19.159226   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:18:19.175596   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:18:19.192259   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 04:18:19.210061   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:18:19.226574   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:18:19.243363   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:18:19.260116   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:18:19.277176   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:18:19.293972   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:18:19.310746   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:18:19.327971   14036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:18:19.340461   14036 ssh_runner.go:195] Run: openssl version
	I0601 04:18:19.345544   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:18:19.353245   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.357005   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.357044   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.361993   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:18:19.369033   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:18:19.376927   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.380775   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.380813   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.385890   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:18:19.392971   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:18:19.400951   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.405006   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.405058   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.410775   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
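The ln -fs steps install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming convention, which is why /etc/ssl/certs/b5213941.0 ends up pointing at minikubeCA.pem. The hash comes straight from the certificate (sketch):

    # prints the subject hash, b5213941 for the minikube CA in this run
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # OpenSSL then looks the CA up as <hash>.0
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0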
	I0601 04:18:19.418340   14036 kubeadm.go:395] StartCluster: {Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:19.418443   14036 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:18:19.448862   14036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:18:19.456368   14036 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:18:19.456381   14036 kubeadm.go:626] restartCluster start
	I0601 04:18:19.456423   14036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:18:19.463881   14036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:19.463941   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:19.568912   14036 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220601041659-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:18:19.569088   14036 kubeconfig.go:127] "no-preload-20220601041659-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:18:19.569472   14036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:18:19.570824   14036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:18:19.578849   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.578930   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:19.587781   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.440923   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:20.483332   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:20.514761   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.514773   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:20.514833   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:20.546039   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.546053   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:20.546108   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:20.575400   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.575414   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:20.575469   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:20.606603   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.606617   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:20.606680   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:20.635837   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.635849   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:20.635906   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:20.666144   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.666157   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:20.666211   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:20.694854   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.694866   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:20.694924   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:20.725318   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.725331   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:20.725338   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:20.725345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:20.778767   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:20.778778   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:20.778785   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:20.790876   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:20.790888   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:22.843261   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05233999s)
	I0601 04:18:22.843425   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:22.843432   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:22.886071   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:22.886084   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:19.789971   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.799683   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:19.810034   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:19.990006   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.990226   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.000773   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.190022   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.190234   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.201267   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.387932   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.388133   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.399100   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.588039   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.588104   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.597935   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.788903   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.788959   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.798457   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.990036   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.990239   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.000901   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.189970   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.190109   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.202580   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.390048   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.390138   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.402774   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.590024   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.590200   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.601412   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.787917   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.787977   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.797125   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.990025   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.990212   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.001077   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.190009   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.190214   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.201491   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.390189   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.390292   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.401348   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.588348   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.588437   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.597421   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.597432   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.597484   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.605651   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.605661   14036 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:18:22.605669   14036 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:18:22.605721   14036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:18:22.637821   14036 docker.go:442] Stopping containers: [f3ab122c826b 1734d7965330 83819780dd93 b9768112bb8b 84611d08ab8e 48c8256f94d6 651e4e6fd977 4702c401989d f48f3e09df46 e1fc171fe8aa cd2a23f7c38c 85e4aa0cd1f6 030ece384801 b93e15c9f0f8 03abb63ba5d1 f241878ca7d9]
	I0601 04:18:22.637899   14036 ssh_runner.go:195] Run: docker stop f3ab122c826b 1734d7965330 83819780dd93 b9768112bb8b 84611d08ab8e 48c8256f94d6 651e4e6fd977 4702c401989d f48f3e09df46 e1fc171fe8aa cd2a23f7c38c 85e4aa0cd1f6 030ece384801 b93e15c9f0f8 03abb63ba5d1 f241878ca7d9
	I0601 04:18:22.668131   14036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:18:22.678704   14036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:18:22.686703   14036 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:17 /etc/kubernetes/scheduler.conf
	
	I0601 04:18:22.686754   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:18:22.694474   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:18:22.701738   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:18:22.708977   14036 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.709021   14036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:18:22.716105   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:18:22.723106   14036 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.723152   14036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:18:22.729915   14036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:22.737243   14036 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:22.737252   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:22.785192   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.493831   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.616417   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.664597   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.714856   14036 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:18:23.714918   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.224606   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.724651   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.741543   14036 api_server.go:71] duration metric: took 1.026678054s to wait for apiserver process to appear ...
	I0601 04:18:24.741573   14036 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:18:24.741609   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:24.743165   14036 api_server.go:256] stopped: https://127.0.0.1:53162/healthz: Get "https://127.0.0.1:53162/healthz": EOF
	I0601 04:18:25.399324   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:25.481380   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:25.515313   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.515325   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:25.515385   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:25.546864   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.546877   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:25.546942   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:25.582431   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.582445   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:25.582503   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:25.622691   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.622704   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:25.622766   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:25.654669   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.654682   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:25.654738   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:25.685692   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.685706   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:25.685765   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:25.719896   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.719910   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:25.719974   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:25.755042   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.755058   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:25.755066   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:25.755074   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:25.815872   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:25.815883   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:25.815891   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:25.829154   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:25.829166   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:27.888157   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058957128s)
	I0601 04:18:27.888265   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:27.888293   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:27.929491   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:27.929508   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:25.243670   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:27.652029   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:18:27.652045   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:18:27.743312   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:27.749868   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:27.749888   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:28.243386   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:28.250986   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:28.250999   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:28.743315   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:28.749565   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:28.749583   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:29.243324   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:29.249665   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 200:
	ok
	I0601 04:18:29.256790   14036 api_server.go:140] control plane version: v1.23.6
	I0601 04:18:29.256806   14036 api_server.go:130] duration metric: took 4.515177636s to wait for apiserver health ...
	I0601 04:18:29.256812   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:29.256817   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:29.256824   14036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:18:29.265857   14036 system_pods.go:59] 8 kube-system pods found
	I0601 04:18:29.265875   14036 system_pods.go:61] "coredns-64897985d-89vc5" [95167d56-5dd4-4982-a6ca-86bb2e4620e3] Running
	I0601 04:18:29.265879   14036 system_pods.go:61] "etcd-no-preload-20220601041659-2342" [41190448-255a-49e9-b1e9-8ea601ad0843] Running
	I0601 04:18:29.265884   14036 system_pods.go:61] "kube-apiserver-no-preload-20220601041659-2342" [68c306bb-05ab-46ec-a523-865fe75e873a] Running
	I0601 04:18:29.265893   14036 system_pods.go:61] "kube-controller-manager-no-preload-20220601041659-2342" [e54984b5-ad07-42c7-8adc-e3d945a55efe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:18:29.265898   14036 system_pods.go:61] "kube-proxy-fgsgh" [bdfa1c31-6750-4343-b15b-08de66100496] Running
	I0601 04:18:29.265903   14036 system_pods.go:61] "kube-scheduler-no-preload-20220601041659-2342" [5e7b361b-cc2a-420b-83d2-0f0710b6dbd4] Running
	I0601 04:18:29.265908   14036 system_pods.go:61] "metrics-server-b955d9d8-64p54" [75ee83a8-d23f-44d3-ad4a-370743a2a88d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:18:29.265915   14036 system_pods.go:61] "storage-provisioner" [401f203f-92b1-4ae2-a59c-19909e579b9a] Running
	I0601 04:18:29.265919   14036 system_pods.go:74] duration metric: took 9.090386ms to wait for pod list to return data ...
	I0601 04:18:29.265926   14036 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:18:29.268895   14036 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:18:29.268910   14036 node_conditions.go:123] node cpu capacity is 6
	I0601 04:18:29.268921   14036 node_conditions.go:105] duration metric: took 2.991293ms to run NodePressure ...
	I0601 04:18:29.268932   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:29.533767   14036 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 04:18:29.539069   14036 kubeadm.go:777] kubelet initialised
	I0601 04:18:29.539090   14036 kubeadm.go:778] duration metric: took 5.30228ms waiting for restarted kubelet to initialise ...
	I0601 04:18:29.539104   14036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:18:29.544448   14036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-89vc5" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.550983   14036 pod_ready.go:92] pod "coredns-64897985d-89vc5" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.550993   14036 pod_ready.go:81] duration metric: took 6.531028ms waiting for pod "coredns-64897985d-89vc5" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.550999   14036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.596543   14036 pod_ready.go:92] pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.596558   14036 pod_ready.go:81] duration metric: took 45.552599ms waiting for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.596566   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.603596   14036 pod_ready.go:92] pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.603609   14036 pod_ready.go:81] duration metric: took 7.03783ms waiting for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.603621   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:30.444730   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:30.481478   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:30.511666   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.511679   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:30.511732   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:30.542700   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.542715   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:30.542772   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:30.572035   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.572047   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:30.572104   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:30.603167   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.603179   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:30.603238   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:30.632389   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.632402   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:30.632456   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:30.660425   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.660437   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:30.660494   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:30.692427   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.692440   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:30.692498   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:30.721182   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.721194   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:30.721201   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:30.721209   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:30.763615   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:30.763627   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:30.779090   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:30.779105   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:30.837839   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:30.837850   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:30.837857   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:30.851365   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:30.851379   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:32.907858   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056443603s)
	I0601 04:18:31.668923   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:34.169366   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:35.408111   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:35.483017   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:35.513087   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.513099   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:35.513153   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:35.541148   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.541161   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:35.541222   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:35.569639   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.569652   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:35.569708   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:35.599189   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.599201   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:35.599254   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:35.628983   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.628995   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:35.629052   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:35.658557   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.658569   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:35.658623   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:35.691031   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.691058   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:35.691174   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:35.721259   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.721271   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:35.721277   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:35.721284   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:35.733301   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:35.733315   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:35.785853   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:35.785866   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:35.785872   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:35.799604   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:35.799616   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:37.856133   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056481616s)
	I0601 04:18:37.856244   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:37.856250   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:36.666864   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:38.669699   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:40.397963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:40.408789   13556 kubeadm.go:630] restartCluster took 4m7.458583962s
	W0601 04:18:40.408865   13556 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0601 04:18:40.408881   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:18:40.824000   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:18:40.833055   13556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:40.846500   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:18:40.846568   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:18:40.859653   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:18:40.859688   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:18:41.605164   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:18:42.649022   13556 out.go:204]   - Booting up control plane ...
	I0601 04:18:40.667004   14036 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.667016   14036 pod_ready.go:81] duration metric: took 11.063266842s waiting for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.667023   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fgsgh" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.670891   14036 pod_ready.go:92] pod "kube-proxy-fgsgh" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.670899   14036 pod_ready.go:81] duration metric: took 3.871243ms waiting for pod "kube-proxy-fgsgh" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.670904   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.675132   14036 pod_ready.go:92] pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.675141   14036 pod_ready.go:81] duration metric: took 4.221246ms waiting for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.675147   14036 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:42.684353   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:44.685528   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:46.687697   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:48.688040   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:51.186606   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:53.187481   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:55.188655   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:57.687295   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:00.185897   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:02.686911   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:04.688219   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:07.185914   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:09.188825   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:11.688002   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:14.187662   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:16.188114   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:18.188168   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:20.188303   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:22.688347   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:25.186136   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:27.188737   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:29.685888   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:31.687374   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:33.688010   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:35.688223   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:38.186051   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:40.685861   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:43.184495   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:45.184738   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:47.188619   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:49.688777   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:52.187737   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:54.188673   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:56.685934   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:58.688410   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:01.185111   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:03.185648   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:05.186993   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:07.188774   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:09.189538   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:11.687713   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:13.688917   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:16.189541   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:18.689213   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:21.186825   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:23.187139   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:25.187762   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:27.687801   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:29.689029   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:32.186695   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:34.188405   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	W0601 04:20:37.567191   13556 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 04:20:37.567223   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:20:37.985183   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:20:37.995063   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:20:37.995115   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:20:38.003134   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:20:38.003167   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:20:38.714980   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:20:36.688566   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:39.188270   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:39.157245   13556 out.go:204]   - Booting up control plane ...
	I0601 04:20:41.688898   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:44.185451   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:46.186959   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:48.685368   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:50.687302   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:53.186843   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:55.189047   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:57.189228   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:59.689548   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:02.185896   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:04.687858   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:07.189539   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:09.687226   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:11.689231   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:14.186006   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:16.188122   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:18.688041   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:20.695527   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:23.199009   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:25.203674   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:27.704456   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:30.208738   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:32.711751   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:34.714219   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:37.216949   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:39.714895   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:41.720032   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:44.217871   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:46.221799   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:48.720839   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:50.722793   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:53.221273   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:55.223691   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:57.723160   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:59.724889   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:02.222785   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:04.225050   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:06.723223   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:08.723519   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:10.726350   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:13.225782   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:15.229179   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:17.726361   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:20.226547   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:22.228014   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:24.725636   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:26.726651   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:29.225011   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:31.725015   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:33.726567   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:34.113538   13556 kubeadm.go:397] StartCluster complete in 8m1.165906933s
	I0601 04:22:34.113614   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:22:34.143687   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.143700   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:22:34.143755   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:22:34.173703   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.173716   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:22:34.173771   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:22:34.204244   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.204257   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:22:34.204312   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:22:34.235759   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.235775   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:22:34.235836   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:22:34.265295   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.265308   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:22:34.265362   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:22:34.294194   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.294207   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:22:34.294263   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:22:34.323578   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.323590   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:22:34.323645   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:22:34.353103   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.353115   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:22:34.353122   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:22:34.353128   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:22:34.396193   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:22:34.396212   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:22:34.408612   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:22:34.408626   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:22:34.471074   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:22:34.471086   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:22:34.471093   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:22:34.483079   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:22:34.483090   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:22:36.538288   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055125762s)
	W0601 04:22:36.538414   13556 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 04:22:36.538429   13556 out.go:239] * 
	W0601 04:22:36.538563   13556 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.538581   13556 out.go:239] * 
	W0601 04:22:36.539131   13556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 04:22:36.603750   13556 out.go:177] 
	W0601 04:22:36.646990   13556 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.647054   13556 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 04:22:36.647091   13556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 04:22:36.667708   13556 out.go:177] 
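	[Editor's note, not part of the captured run] The K8S_KUBELET_NOT_RUNNING exit above points at the cgroup-driver mismatch tracked in minikube issue 4172, and the log itself suggests retrying with the kubelet cgroup driver pinned to systemd. A minimal, hedged sketch of trying that by hand — <profile> is a placeholder for the failing profile name, which is elided in this excerpt:

	  # retry the start with the suggested extra kubelet config
	  out/minikube-darwin-amd64 start -p <profile> --driver=docker \
	    --extra-config=kubelet.cgroup-driver=systemd
	  # if it still fails, inspect the kubelet unit inside the node, as the log advises
	  out/minikube-darwin-amd64 ssh -p <profile> -- sudo journalctl -xeu kubelet | tail -n 50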
	I0601 04:22:36.224570   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:38.226521   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:40.720948   14036 pod_ready.go:81] duration metric: took 4m0.004760286s waiting for pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace to be "Ready" ...
	E0601 04:22:40.720960   14036 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 04:22:40.720971   14036 pod_ready.go:38] duration metric: took 4m11.140717277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:22:40.720995   14036 kubeadm.go:630] restartCluster took 4m21.223355239s
	W0601 04:22:40.721068   14036 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 04:22:40.721085   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:23:19.206218   14036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.484454591s)
	I0601 04:23:19.206280   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:23:19.216115   14036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:23:19.224292   14036 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:23:19.224335   14036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:23:19.231888   14036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:23:19.231915   14036 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:23:19.732100   14036 out.go:204]   - Generating certificates and keys ...
	I0601 04:23:20.637005   14036 out.go:204]   - Booting up control plane ...
	I0601 04:23:27.184771   14036 out.go:204]   - Configuring RBAC rules ...
	I0601 04:23:27.636872   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:23:27.636887   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:23:27.636907   14036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:23:27.636996   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=no-preload-20220601041659-2342 minikube.k8s.io/updated_at=2022_06_01T04_23_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:27.637021   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:27.652079   14036 ops.go:34] apiserver oom_adj: -16
	I0601 04:23:27.830215   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:28.430751   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:28.930740   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:29.431114   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:29.930775   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:30.430813   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:30.931405   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:31.431219   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:31.931379   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:32.432223   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:32.931433   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:33.432954   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:33.931528   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:34.430840   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:34.930779   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:35.431436   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:35.931212   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:36.431621   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:36.932267   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:37.430837   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:37.932483   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:38.433048   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:38.931387   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:39.432562   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:39.930919   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:40.431610   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:40.485985   14036 kubeadm.go:1045] duration metric: took 12.84888152s to wait for elevateKubeSystemPrivileges.
	I0601 04:23:40.486001   14036 kubeadm.go:397] StartCluster complete in 5m21.025467082s
	I0601 04:23:40.486017   14036 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:23:40.486096   14036 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:23:40.486639   14036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:23:41.002750   14036 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220601041659-2342" rescaled to 1
	I0601 04:23:41.002794   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:23:41.002822   14036 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:23:41.002790   14036 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:23:41.024225   14036 out.go:177] * Verifying Kubernetes components...
	I0601 04:23:41.002870   14036 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.002871   14036 addons.go:65] Setting metrics-server=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.002882   14036 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.045170   14036 addons.go:153] Setting addon metrics-server=true in "no-preload-20220601041659-2342"
	W0601 04:23:41.045190   14036 addons.go:165] addon metrics-server should already be in state true
	I0601 04:23:41.002905   14036 addons.go:65] Setting dashboard=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.045204   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:23:41.045206   14036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220601041659-2342"
	I0601 04:23:41.045224   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.002991   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:23:41.045243   14036 addons.go:153] Setting addon dashboard=true in "no-preload-20220601041659-2342"
	W0601 04:23:41.045258   14036 addons.go:165] addon dashboard should already be in state true
	I0601 04:23:41.024274   14036 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220601041659-2342"
	W0601 04:23:41.045272   14036 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:23:41.045301   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.045313   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.045537   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.045591   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.045632   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.045667   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.060815   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:23:41.063469   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.193932   14036 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:23:41.175016   14036 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220601041659-2342"
	I0601 04:23:41.229941   14036 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	W0601 04:23:41.229964   14036 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:23:41.267107   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:23:41.283785   14036 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220601041659-2342" to be "Ready" ...
	I0601 04:23:41.304029   14036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:23:41.304040   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:23:41.304068   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.310158   14036 node_ready.go:49] node "no-preload-20220601041659-2342" has status "Ready":"True"
	I0601 04:23:41.378018   14036 node_ready.go:38] duration metric: took 73.984801ms waiting for node "no-preload-20220601041659-2342" to be "Ready" ...
	I0601 04:23:41.378031   14036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:23:41.341073   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.341078   14036 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:23:41.378079   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:23:41.415181   14036 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:23:41.341460   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.383871   14036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-4th8d" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:41.415313   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.451975   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:23:41.451990   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:23:41.452058   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.476653   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.567212   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.567687   14036 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:23:41.567699   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:23:41.567752   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.571152   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.641726   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:23:41.641740   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:23:41.653203   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.729747   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:23:41.729770   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:23:41.748626   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:23:41.748639   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:23:41.813442   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:23:41.813470   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:23:41.835895   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:23:41.835912   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:23:41.838988   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:23:41.839581   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:23:41.918717   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:23:41.918736   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:23:41.946533   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:23:42.039688   14036 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 04:23:42.043776   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:23:42.043789   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:23:42.144037   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:23:42.144061   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:23:42.247696   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:23:42.247710   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:23:42.325940   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:23:42.325956   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:23:42.353302   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:23:42.353328   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:23:42.450282   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:23:42.450297   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:23:42.617845   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:23:42.836668   14036 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220601041659-2342"
	I0601 04:23:43.469737   14036 pod_ready.go:92] pod "coredns-64897985d-4th8d" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.469754   14036 pod_ready.go:81] duration metric: took 2.017769245s waiting for pod "coredns-64897985d-4th8d" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.469762   14036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.479814   14036 pod_ready.go:92] pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.479825   14036 pod_ready.go:81] duration metric: took 10.057047ms waiting for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.479832   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.487495   14036 pod_ready.go:92] pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.487520   14036 pod_ready.go:81] duration metric: took 7.682581ms waiting for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.487547   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.494534   14036 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.494546   14036 pod_ready.go:81] duration metric: took 6.994344ms waiting for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.494568   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ff67" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.501300   14036 pod_ready.go:92] pod "kube-proxy-7ff67" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.501324   14036 pod_ready.go:81] duration metric: took 6.751865ms waiting for pod "kube-proxy-7ff67" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.501351   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.749111   14036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.131199262s)
	I0601 04:23:43.774836   14036 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0601 04:23:43.849116   14036 addons.go:417] enableAddons completed in 2.846213477s
	I0601 04:23:43.868583   14036 pod_ready.go:92] pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.868594   14036 pod_ready.go:81] duration metric: took 367.23204ms waiting for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.868604   14036 pod_ready.go:38] duration metric: took 2.490525792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:23:43.868620   14036 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:23:43.868683   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:23:43.878538   14036 api_server.go:71] duration metric: took 2.875633694s to wait for apiserver process to appear ...
	I0601 04:23:43.878552   14036 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:23:43.878559   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:23:43.883418   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 200:
	ok
	I0601 04:23:43.884917   14036 api_server.go:140] control plane version: v1.23.6
	I0601 04:23:43.884924   14036 api_server.go:130] duration metric: took 6.368843ms to wait for apiserver health ...
	I0601 04:23:43.884929   14036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:23:44.070448   14036 system_pods.go:59] 8 kube-system pods found
	I0601 04:23:44.070463   14036 system_pods.go:61] "coredns-64897985d-4th8d" [1e28756e-461b-4daf-a314-201279e6f280] Running
	I0601 04:23:44.070467   14036 system_pods.go:61] "etcd-no-preload-20220601041659-2342" [f062c2b7-2336-4312-b729-118c8a26d909] Running
	I0601 04:23:44.070477   14036 system_pods.go:61] "kube-apiserver-no-preload-20220601041659-2342" [3fb9b4d8-e18d-4c5a-812d-7c0f81615e1f] Running
	I0601 04:23:44.070482   14036 system_pods.go:61] "kube-controller-manager-no-preload-20220601041659-2342" [a9257f5c-b1fb-4410-ba04-98f3f46b470f] Running
	I0601 04:23:44.070488   14036 system_pods.go:61] "kube-proxy-7ff67" [bba8c125-49b9-46a6-bd15-66a15ed18932] Running
	I0601 04:23:44.070491   14036 system_pods.go:61] "kube-scheduler-no-preload-20220601041659-2342" [d2b3afb0-7805-466b-89d0-8bf20f418464] Running
	I0601 04:23:44.070501   14036 system_pods.go:61] "metrics-server-b955d9d8-dspp8" [da6693e5-ac7d-49f9-8894-8b27a22ee111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:23:44.070505   14036 system_pods.go:61] "storage-provisioner" [164a6111-4f51-4058-bfd8-8e81ce03ab6f] Running
	I0601 04:23:44.070509   14036 system_pods.go:74] duration metric: took 185.574848ms to wait for pod list to return data ...
	I0601 04:23:44.070514   14036 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:23:44.266163   14036 default_sa.go:45] found service account: "default"
	I0601 04:23:44.266174   14036 default_sa.go:55] duration metric: took 195.654314ms for default service account to be created ...
	I0601 04:23:44.266179   14036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:23:44.469975   14036 system_pods.go:86] 8 kube-system pods found
	I0601 04:23:44.469988   14036 system_pods.go:89] "coredns-64897985d-4th8d" [1e28756e-461b-4daf-a314-201279e6f280] Running
	I0601 04:23:44.469992   14036 system_pods.go:89] "etcd-no-preload-20220601041659-2342" [f062c2b7-2336-4312-b729-118c8a26d909] Running
	I0601 04:23:44.469996   14036 system_pods.go:89] "kube-apiserver-no-preload-20220601041659-2342" [3fb9b4d8-e18d-4c5a-812d-7c0f81615e1f] Running
	I0601 04:23:44.470000   14036 system_pods.go:89] "kube-controller-manager-no-preload-20220601041659-2342" [a9257f5c-b1fb-4410-ba04-98f3f46b470f] Running
	I0601 04:23:44.470003   14036 system_pods.go:89] "kube-proxy-7ff67" [bba8c125-49b9-46a6-bd15-66a15ed18932] Running
	I0601 04:23:44.470009   14036 system_pods.go:89] "kube-scheduler-no-preload-20220601041659-2342" [d2b3afb0-7805-466b-89d0-8bf20f418464] Running
	I0601 04:23:44.470015   14036 system_pods.go:89] "metrics-server-b955d9d8-dspp8" [da6693e5-ac7d-49f9-8894-8b27a22ee111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:23:44.470020   14036 system_pods.go:89] "storage-provisioner" [164a6111-4f51-4058-bfd8-8e81ce03ab6f] Running
	I0601 04:23:44.470024   14036 system_pods.go:126] duration metric: took 203.839765ms to wait for k8s-apps to be running ...
	I0601 04:23:44.470029   14036 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:23:44.470075   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:23:44.479988   14036 system_svc.go:56] duration metric: took 9.950615ms WaitForService to wait for kubelet.
	I0601 04:23:44.480003   14036 kubeadm.go:572] duration metric: took 3.477092877s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:23:44.480018   14036 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:23:44.669627   14036 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:23:44.669642   14036 node_conditions.go:123] node cpu capacity is 6
	I0601 04:23:44.669651   14036 node_conditions.go:105] duration metric: took 189.627547ms to run NodePressure ...
	I0601 04:23:44.669660   14036 start.go:213] waiting for startup goroutines ...
	I0601 04:23:44.700018   14036 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:23:44.722598   14036 out.go:177] * Done! kubectl is now configured to use "no-preload-20220601041659-2342" cluster and "default" namespace by default
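	[Editor's note, not part of the captured run] Once minikube reports the context switch above, the state of the no-preload-20220601041659-2342 cluster can be spot-checked from the host. An illustrative sketch only, assuming the kubectl 1.24.0 reported by the run:

	  # confirm the context minikube just configured
	  kubectl config current-context
	  # list the kube-system pods the log enumerated, including the Pending metrics-server pod
	  kubectl --context no-preload-20220601041659-2342 -n kube-system get pods -o wide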
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:18:16 UTC, end at Wed 2022-06-01 11:24:44 UTC. --
	Jun 01 11:22:57 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:22:57.032711600Z" level=info msg="ignoring event" container=cac11b58a3cc48e6d372ab8e3ea1869396f08f9223447960515eb5e95e12ecea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:22:57 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:22:57.190043064Z" level=info msg="ignoring event" container=3297ecd3a79b8e2555bfea1ba389bb82c2b0716bfde7c706fbd4a8e1f44c1797 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:22:57 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:22:57.427314872Z" level=info msg="ignoring event" container=fbdb258171d75d98f185e3d5e888796f1bbbfd5f353bab1f116a8685d1320114 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:22:57 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:22:57.543112672Z" level=info msg="ignoring event" container=3f8ade1034933d463366bb1a856f4b47717dff8ad694a420f8b5d24386de8180 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:22:57 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:22:57.662315455Z" level=info msg="ignoring event" container=01045ae278a204c9119c01c95bc94c433879d2e92125b6f492a9f8760753b48b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:07 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:07.753645699Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=5d7c2e3d72fe9009f71d9e49b25a461f6b111a7657c8dd2cc6a4815728e598af
	Jun 01 11:23:07 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:07.808804529Z" level=info msg="ignoring event" container=5d7c2e3d72fe9009f71d9e49b25a461f6b111a7657c8dd2cc6a4815728e598af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:17 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:17.877612440Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=57fbfccc998db276eb09e3aa6df866227f86fa1043e28783206a47098ab8d1e2
	Jun 01 11:23:17 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:17.906174016Z" level=info msg="ignoring event" container=57fbfccc998db276eb09e3aa6df866227f86fa1043e28783206a47098ab8d1e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.010931680Z" level=info msg="ignoring event" container=254cea236d30b950c485b96f520899815cb7a6d570a78a6df40a5de4c927dda3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.114730603Z" level=info msg="ignoring event" container=1e33579088f2c1c8a8c88d5587b71940c8bf92928270f8fc2cb07ef429c4bc74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.219816478Z" level=info msg="ignoring event" container=57466844373b9d83cf886e94d47f7e9309ae7b4668f8b9659a9c571a7120b0de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.351475023Z" level=info msg="ignoring event" container=3f714de6344bb4a29f0e59684a9b1eb148ed6cb07dbef1c1c67db88eef097faf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:43 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:43.511008128Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:23:43 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:43.511075446Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:23:43 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:43.512680720Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:23:45 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:45.612898249Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:23:45 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:45.821166177Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:23:49 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:49.063495338Z" level=info msg="ignoring event" container=b2efd81fcfbf952661f88b51f3ff217a47410f4da8aa8affe391906827b571ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:49 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:49.079413840Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 11:23:49 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:49.650068343Z" level=info msg="ignoring event" container=f5becf26faa418f08d0eb667b16f1da433ffedaddc5edee88000ca9b67f25f0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:24:00 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:00.852203954Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:00 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:00.852337102Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:00 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:00.853615723Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:04 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:04.948083924Z" level=info msg="ignoring event" container=da95c41640e4443a86eadcfe89569bc1e020b9315adf526c8addb12e0852f358 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	da95c41640e44       a90209bb39e3d                                                                                    40 seconds ago       Exited              dashboard-metrics-scraper   2                   64d304f23d052
	d096a82b3ce0e       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   51 seconds ago       Running             kubernetes-dashboard        0                   ab25ec0d8943c
	f288409ac9503       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   72832b818d097
	8ec6fde60edd1       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   5a550f203ec45
	f66543b57764b       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   5b1302cc8d6f7
	d2b9c51db2b2b       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   ad72a05d02460
	64c417b589d52       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   1f70d997cffdb
	ac52a9b775d7d       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   5bc8932ec5c76
	10f840b7399c6       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   e30a00d0dbfb0
	
	* 
	* ==> coredns [8ec6fde60edd] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220601041659-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220601041659-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=no-preload-20220601041659-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_23_27_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220601041659-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:24:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    no-preload-20220601041659-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                e23cd456-aaaa-4c54-8dbe-cf17db0b9e1d
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-4th8d                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-no-preload-20220601041659-2342                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-no-preload-20220601041659-2342             250m (4%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-no-preload-20220601041659-2342    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-7ff67                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-no-preload-20220601041659-2342             100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 metrics-server-b955d9d8-dspp8                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-97rct                0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-jxm74                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 62s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    83s (x4 over 83s)  kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x4 over 83s)  kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  83s (x5 over 83s)  kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 77s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                67s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeReady
	  Normal  Starting                 3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s (x2 over 3s)    kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s (x2 over 3s)    kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s (x2 over 3s)    kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet     Node no-preload-20220601041659-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node no-preload-20220601041659-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [ac52a9b775d7] <==
	* {"level":"info","ts":"2022-06-01T11:23:22.147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:23:22.147Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.742Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:no-preload-20220601041659-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:23:22.744Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:23:22.744Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:24:44 up  1:05,  0 users,  load average: 0.27, 0.66, 0.86
	Linux no-preload-20220601041659-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [64c417b589d5] <==
	* I0601 11:23:25.898650       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:23:25.921086       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:23:25.987476       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:23:25.991194       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:23:25.991837       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:23:25.994422       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:23:26.779298       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:23:27.418634       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:23:27.426680       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:23:27.436953       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:23:27.593387       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:23:40.071234       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:23:40.532927       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:23:42.251826       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:23:42.843000       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.105.218.124]
	W0601 11:23:43.653241       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:23:43.653292       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:23:43.653298       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:23:43.751938       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.187.56]
	I0601 11:23:43.760773       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.182.160]
	W0601 11:24:43.611862       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:24:43.611964       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:24:43.611990       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [10f840b7399c] <==
	* I0601 11:23:43.617465       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:23:43.622822       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0601 11:23:43.631680       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:23:43.635642       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:23:43.635854       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:23:43.635868       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:23:43.642464       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:23:43.643223       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:23:43.643583       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:23:43.651782       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:23:43.652041       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:23:43.659606       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-jxm74"
	I0601 11:23:43.672056       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-97rct"
	E0601 11:24:41.338577       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0601 11:24:41.388194       1 event.go:294] "Event occurred" object="no-preload-20220601041659-2342" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node no-preload-20220601041659-2342 status is now: NodeNotReady"
	I0601 11:24:41.397463       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0601 11:24:41.397982       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0601 11:24:41.402750       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.410640       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d-4th8d" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.417418       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77-jxm74" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.501273       1 event.go:294] "Event occurred" object="kube-system/etcd-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.513971       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-7ff67" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.519888       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.525501       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:24:41.525626       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [f66543b57764] <==
	* I0601 11:23:42.141364       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:23:42.141434       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:23:42.141572       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:23:42.246760       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:23:42.246832       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:23:42.246838       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:23:42.246847       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:23:42.247168       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:23:42.249477       1 config.go:317] "Starting service config controller"
	I0601 11:23:42.249540       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:23:42.249578       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:23:42.249582       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:23:42.350230       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:23:42.350261       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d2b9c51db2b2] <==
	* W0601 11:23:24.738669       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:23:24.738700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:23:24.738814       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:23:24.738935       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:23:24.738875       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:23:24.738998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:23:24.739104       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:23:24.739849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:23:24.739083       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:23:24.739885       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:23:25.558895       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:23:25.558945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:23:25.613199       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:23:25.613237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:23:25.613904       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:23:25.613936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:23:25.623358       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:23:25.623392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:23:25.660804       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:23:25.660842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:23:25.736301       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:23:25.736340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:23:25.895320       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:23:25.895338       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:23:27.732245       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:18:16 UTC, end at Wed 2022-06-01 11:24:45 UTC. --
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.034291    7101 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.096816    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bba8c125-49b9-46a6-bd15-66a15ed18932-xtables-lock\") pod \"kube-proxy-7ff67\" (UID: \"bba8c125-49b9-46a6-bd15-66a15ed18932\") " pod="kube-system/kube-proxy-7ff67"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097017    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6w6\" (UniqueName: \"kubernetes.io/projected/da6693e5-ac7d-49f9-8894-8b27a22ee111-kube-api-access-dn6w6\") pod \"metrics-server-b955d9d8-dspp8\" (UID: \"da6693e5-ac7d-49f9-8894-8b27a22ee111\") " pod="kube-system/metrics-server-b955d9d8-dspp8"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097089    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882nv\" (UniqueName: \"kubernetes.io/projected/164a6111-4f51-4058-bfd8-8e81ce03ab6f-kube-api-access-882nv\") pod \"storage-provisioner\" (UID: \"164a6111-4f51-4058-bfd8-8e81ce03ab6f\") " pod="kube-system/storage-provisioner"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097148    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/10921038-abaf-4aac-94d4-5d91d28cb902-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-97rct\" (UID: \"10921038-abaf-4aac-94d4-5d91d28cb902\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-97rct"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097286    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94sj\" (UniqueName: \"kubernetes.io/projected/bba8c125-49b9-46a6-bd15-66a15ed18932-kube-api-access-d94sj\") pod \"kube-proxy-7ff67\" (UID: \"bba8c125-49b9-46a6-bd15-66a15ed18932\") " pod="kube-system/kube-proxy-7ff67"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097364    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/164a6111-4f51-4058-bfd8-8e81ce03ab6f-tmp\") pod \"storage-provisioner\" (UID: \"164a6111-4f51-4058-bfd8-8e81ce03ab6f\") " pod="kube-system/storage-provisioner"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097425    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bba8c125-49b9-46a6-bd15-66a15ed18932-kube-proxy\") pod \"kube-proxy-7ff67\" (UID: \"bba8c125-49b9-46a6-bd15-66a15ed18932\") " pod="kube-system/kube-proxy-7ff67"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097478    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bba8c125-49b9-46a6-bd15-66a15ed18932-lib-modules\") pod \"kube-proxy-7ff67\" (UID: \"bba8c125-49b9-46a6-bd15-66a15ed18932\") " pod="kube-system/kube-proxy-7ff67"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097556    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/da6693e5-ac7d-49f9-8894-8b27a22ee111-tmp-dir\") pod \"metrics-server-b955d9d8-dspp8\" (UID: \"da6693e5-ac7d-49f9-8894-8b27a22ee111\") " pod="kube-system/metrics-server-b955d9d8-dspp8"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097619    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1978d3c4-656a-4b2d-87a0-a796070dbce3-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-jxm74\" (UID: \"1978d3c4-656a-4b2d-87a0-a796070dbce3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-jxm74"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097708    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqql7\" (UniqueName: \"kubernetes.io/projected/1978d3c4-656a-4b2d-87a0-a796070dbce3-kube-api-access-pqql7\") pod \"kubernetes-dashboard-8469778f77-jxm74\" (UID: \"1978d3c4-656a-4b2d-87a0-a796070dbce3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-jxm74"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097794    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e28756e-461b-4daf-a314-201279e6f280-config-volume\") pod \"coredns-64897985d-4th8d\" (UID: \"1e28756e-461b-4daf-a314-201279e6f280\") " pod="kube-system/coredns-64897985d-4th8d"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097854    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsdt9\" (UniqueName: \"kubernetes.io/projected/10921038-abaf-4aac-94d4-5d91d28cb902-kube-api-access-dsdt9\") pod \"dashboard-metrics-scraper-56974995fc-97rct\" (UID: \"10921038-abaf-4aac-94d4-5d91d28cb902\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-97rct"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097917    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wwv2\" (UniqueName: \"kubernetes.io/projected/1e28756e-461b-4daf-a314-201279e6f280-kube-api-access-7wwv2\") pod \"coredns-64897985d-4th8d\" (UID: \"1e28756e-461b-4daf-a314-201279e6f280\") " pod="kube-system/coredns-64897985d-4th8d"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.098021    7101 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:43.454431    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220601041659-2342\" already exists" pod="kube-system/kube-scheduler-no-preload-20220601041659-2342"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:43.634926    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220601041659-2342\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220601041659-2342"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:43.834724    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220601041659-2342\" already exists" pod="kube-system/etcd-no-preload-20220601041659-2342"
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.035239    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220601041659-2342\" already exists" pod="kube-system/kube-apiserver-no-preload-20220601041659-2342"
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.199867    7101 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.200055    7101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bba8c125-49b9-46a6-bd15-66a15ed18932-kube-proxy podName:bba8c125-49b9-46a6-bd15-66a15ed18932 nodeName:}" failed. No retries permitted until 2022-06-01 11:24:44.700028949 +0000 UTC m=+3.062232882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bba8c125-49b9-46a6-bd15-66a15ed18932-kube-proxy") pod "kube-proxy-7ff67" (UID: "bba8c125-49b9-46a6-bd15-66a15ed18932") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.199877    7101 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.200519    7101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1e28756e-461b-4daf-a314-201279e6f280-config-volume podName:1e28756e-461b-4daf-a314-201279e6f280 nodeName:}" failed. No retries permitted until 2022-06-01 11:24:44.700497237 +0000 UTC m=+3.062701168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e28756e-461b-4daf-a314-201279e6f280-config-volume") pod "coredns-64897985d-4th8d" (UID: "1e28756e-461b-4daf-a314-201279e6f280") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:44.229922    7101 request.go:665] Waited for 1.195398024s due to client-side throttling, not priority and fairness, request: GET:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&limit=500&resourceVersion=0
	
	* 
	* ==> kubernetes-dashboard [d096a82b3ce0] <==
	* 2022/06/01 11:23:54 Using namespace: kubernetes-dashboard
	2022/06/01 11:23:54 Using in-cluster config to connect to apiserver
	2022/06/01 11:23:54 Using secret token for csrf signing
	2022/06/01 11:23:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 11:23:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 11:23:54 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 11:23:54 Generating JWE encryption key
	2022/06/01 11:23:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 11:23:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 11:23:54 Initializing JWE encryption key from synchronized object
	2022/06/01 11:23:54 Creating in-cluster Sidecar client
	2022/06/01 11:23:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:23:54 Serving insecurely on HTTP port: 9090
	2022/06/01 11:24:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:23:54 Starting overwatch
	
	* 
	* ==> storage-provisioner [f288409ac950] <==
	* I0601 11:23:43.258660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:23:43.270224       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:23:43.270277       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:23:43.275536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:23:43.275771       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220601041659-2342_c96fd14b-cab0-4ca2-826a-3b61b8568c60!
	I0601 11:23:43.276369       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f0eb7c0-4c28-4a00-817e-70070eb6ac8d", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220601041659-2342_c96fd14b-cab0-4ca2-826a-3b61b8568c60 became leader
	I0601 11:23:43.376685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220601041659-2342_c96fd14b-cab0-4ca2-826a-3b61b8568c60!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-dspp8
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 describe pod metrics-server-b955d9d8-dspp8
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220601041659-2342 describe pod metrics-server-b955d9d8-dspp8: exit status 1 (301.784644ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-dspp8" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220601041659-2342 describe pod metrics-server-b955d9d8-dspp8: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601041659-2342
helpers_test.go:235: (dbg) docker inspect no-preload-20220601041659-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45",
	        "Created": "2022-06-01T11:17:01.899467855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:18:16.054818263Z",
	            "FinishedAt": "2022-06-01T11:18:14.113950323Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/hosts",
	        "LogPath": "/var/lib/docker/containers/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45/3a0a7f863fcaae7c8775c1087db056361b49157a3c3ea7f3bbb3d26debd94b45-json.log",
	        "Name": "/no-preload-20220601041659-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220601041659-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220601041659-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03ec63b0e0b366eaac77a3af0e76c22dd274ee509cf51c2f95b155b864c534a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220601041659-2342",
	                "Source": "/var/lib/docker/volumes/no-preload-20220601041659-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220601041659-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220601041659-2342",
	                "name.minikube.sigs.k8s.io": "no-preload-20220601041659-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79429c5e2d9174bd85f397e0c1392083863399f885d3240ae7ca6278b00efb76",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53165"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53166"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53162"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/79429c5e2d91",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220601041659-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a0a7f863fca",
	                        "no-preload-20220601041659-2342"
	                    ],
	                    "NetworkID": "a99de4b0c2de36bb282e54aada7d2e4017796d0c8751cdbcbb3a530355a143b4",
	                    "EndpointID": "a5d53691e7b0ad947beeea84b395d671ed663b5e3f0c49ea7480e20c2ecac4c2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
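The inspect dump above is the raw `docker container inspect` output for the paused node. The port mappings and network address it records can be queried directly with the same Go templates the harness itself runs later in these logs (a minimal sketch, assuming the no-preload-20220601041659-2342 container still exists on the host):

    # Host port forwarded to the node's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-20220601041659-2342
    # Container IP on the profile's Docker network
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-20220601041659-2342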
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220601041659-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220601041659-2342 logs -n 25: (2.756861225s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | disable-driver-mounts-20220601040914-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | disable-driver-mounts-20220601040914-2342         |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:09 PDT | 01 Jun 22 04:09 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:10 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:14 PDT | 01 Jun 22 04:14 PDT |
	|         | old-k8s-version-20220601040844-2342               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:10 PDT | 01 Jun 22 04:15 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342           | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:17 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342               | old-k8s-version-20220601040844-2342       | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:22 PDT | 01 Jun 22 04:22 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:23 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --memory=2200                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                    | no-preload-20220601041659-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
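	The audit table above records the exact command sequence that preceded this failure. The step exercised by TestStartStop/group/no-preload/serial/Pause can be replayed against the same profile with the flags shown in the table (a sketch, assuming the locally built binary and the profile name from this run):

	    # Replay the pause/unpause step from the audit trail
	    out/minikube-darwin-amd64 pause -p no-preload-20220601041659-2342 --alsologtostderr -v=1
	    out/minikube-darwin-amd64 unpause -p no-preload-20220601041659-2342 --alsologtostderr -v=1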
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:18:14
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:18:14.774878   14036 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:18:14.775106   14036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:18:14.775111   14036 out.go:309] Setting ErrFile to fd 2...
	I0601 04:18:14.775115   14036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:18:14.775218   14036 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:18:14.775473   14036 out.go:303] Setting JSON to false
	I0601 04:18:14.790201   14036 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4664,"bootTime":1654077630,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:18:14.790325   14036 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:18:14.812835   14036 out.go:177] * [no-preload-20220601041659-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:18:14.855264   14036 notify.go:193] Checking for updates...
	I0601 04:18:14.877395   14036 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:18:14.899376   14036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:18:14.921165   14036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:18:14.942588   14036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:18:14.964495   14036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:18:14.986858   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:18:14.987481   14036 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:18:15.060132   14036 docker.go:137] docker version: linux-20.10.14
	I0601 04:18:15.060314   14036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:18:15.195804   14036 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:18:15.147757293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:18:15.239675   14036 out.go:177] * Using the docker driver based on existing profile
	I0601 04:18:15.261306   14036 start.go:284] selected driver: docker
	I0601 04:18:15.261319   14036 start.go:806] validating driver "docker" against &{Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Scheduled
Stop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:15.261387   14036 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:18:15.263519   14036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:18:15.389107   14036 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:18:15.340671415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:18:15.389294   14036 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:18:15.389312   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:15.389320   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:15.389327   14036 start_flags.go:306] config:
	{Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:fa
lse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:15.411303   14036 out.go:177] * Starting control plane node no-preload-20220601041659-2342 in cluster no-preload-20220601041659-2342
	I0601 04:18:15.432174   14036 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:18:15.453963   14036 out.go:177] * Pulling base image ...
	I0601 04:18:15.496035   14036 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:18:15.496046   14036 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:18:15.496172   14036 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/config.json ...
	I0601 04:18:15.496274   14036 cache.go:107] acquiring lock: {Name:mk3e9a6bf873842d2e5ca428e419405f67698986 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.496248   14036 cache.go:107] acquiring lock: {Name:mk6cdcb3277425415932624173a7b7ca3460ec43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497016   14036 cache.go:107] acquiring lock: {Name:mk5aea169468c70908c7500bcfea18f2c75c6bec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497323   14036 cache.go:107] acquiring lock: {Name:mk0ce8763eede5207a594beee88851a0e339bc7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497362   14036 cache.go:107] acquiring lock: {Name:mk735d5a3617189a069af22bcee4c9a1653c60c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497430   14036 cache.go:107] acquiring lock: {Name:mkbce65c6aa4c06171eeb95b8350c15ff2252191 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497532   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0601 04:18:15.497461   14036 cache.go:107] acquiring lock: {Name:mkc7860c5e3d5dd07d6a0cd1126cb14b20ddb5fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497556   14036 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 1.295562ms
	I0601 04:18:15.497574   14036 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0601 04:18:15.497985   14036 cache.go:107] acquiring lock: {Name:mk2917ee5d109fb25f09b3f463d8b7c0891736eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.497986   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 exists
	I0601 04:18:15.498016   14036 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6" took 1.771839ms
	I0601 04:18:15.498038   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0601 04:18:15.498038   14036 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 succeeded
	I0601 04:18:15.498057   14036 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.82584ms
	I0601 04:18:15.498071   14036 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0601 04:18:15.498155   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 exists
	I0601 04:18:15.498156   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 exists
	I0601 04:18:15.498158   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0601 04:18:15.498169   14036 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6" took 1.113538ms
	I0601 04:18:15.498179   14036 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 succeeded
	I0601 04:18:15.498177   14036 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6" took 1.19488ms
	I0601 04:18:15.498188   14036 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 succeeded
	I0601 04:18:15.498180   14036 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 1.089655ms
	I0601 04:18:15.498208   14036 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0601 04:18:15.498227   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 exists
	I0601 04:18:15.498223   14036 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0601 04:18:15.498233   14036 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6" took 896.28µs
	I0601 04:18:15.498240   14036 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 1.200632ms
	I0601 04:18:15.498243   14036 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 succeeded
	I0601 04:18:15.498248   14036 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0601 04:18:15.498261   14036 cache.go:87] Successfully saved all images to host disk.
	I0601 04:18:15.562486   14036 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:18:15.562503   14036 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:18:15.562514   14036 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:18:15.562561   14036 start.go:352] acquiring machines lock for no-preload-20220601041659-2342: {Name:mk58caff34cdda9e203618eaf8e1336a225589ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:18:15.562634   14036 start.go:356] acquired machines lock for "no-preload-20220601041659-2342" in 62.594µs
	I0601 04:18:15.562660   14036 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:18:15.562670   14036 fix.go:55] fixHost starting: 
	I0601 04:18:15.562891   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:18:15.632241   14036 fix.go:103] recreateIfNeeded on no-preload-20220601041659-2342: state=Stopped err=<nil>
	W0601 04:18:15.632274   14036 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:18:15.654231   14036 out.go:177] * Restarting existing docker container for "no-preload-20220601041659-2342" ...
	I0601 04:18:15.403723   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:15.481185   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:15.512966   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.512977   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:15.513037   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:15.544508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.544521   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:15.544567   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:15.581483   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.581494   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:15.581555   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:15.613508   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.613522   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:15.613578   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:15.645122   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.645148   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:15.645206   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:15.675331   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.675344   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:15.675397   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:15.706115   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.706144   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:15.706233   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:15.738582   13556 logs.go:274] 0 containers: []
	W0601 04:18:15.738596   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:15.738604   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:15.738612   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:15.803326   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:15.803338   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:15.803345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:15.821038   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:15.821061   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:17.883284   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06218452s)
	I0601 04:18:17.883398   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:17.883406   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:17.927628   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:17.927643   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:15.696056   14036 cli_runner.go:164] Run: docker start no-preload-20220601041659-2342
	I0601 04:18:16.064616   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:18:16.139380   14036 kic.go:416] container "no-preload-20220601041659-2342" state is running.
	I0601 04:18:16.140133   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:16.223932   14036 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/config.json ...
	I0601 04:18:16.224334   14036 machine.go:88] provisioning docker machine ...
	I0601 04:18:16.224356   14036 ubuntu.go:169] provisioning hostname "no-preload-20220601041659-2342"
	I0601 04:18:16.224451   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.304277   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:16.304462   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:16.304479   14036 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220601041659-2342 && echo "no-preload-20220601041659-2342" | sudo tee /etc/hostname
	I0601 04:18:16.433244   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220601041659-2342
	
	I0601 04:18:16.433319   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.508502   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:16.508694   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:16.508709   14036 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220601041659-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220601041659-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220601041659-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:18:16.629821   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:18:16.629881   14036 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:18:16.629920   14036 ubuntu.go:177] setting up certificates
	I0601 04:18:16.629931   14036 provision.go:83] configureAuth start
	I0601 04:18:16.630007   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:16.783851   14036 provision.go:138] copyHostCerts
	I0601 04:18:16.783934   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:18:16.783942   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:18:16.784048   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:18:16.784282   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:18:16.784290   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:18:16.784348   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:18:16.784513   14036 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:18:16.784521   14036 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:18:16.784583   14036 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:18:16.784734   14036 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220601041659-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220601041659-2342]
	I0601 04:18:16.853835   14036 provision.go:172] copyRemoteCerts
	I0601 04:18:16.853899   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:18:16.853944   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:16.930312   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:17.016327   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 04:18:17.035298   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:18:17.053766   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:18:17.073791   14036 provision.go:86] duration metric: configureAuth took 443.84114ms
	I0601 04:18:17.073803   14036 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:18:17.073938   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:18:17.073997   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.145242   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.145411   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.145424   14036 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:18:17.264247   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:18:17.264260   14036 ubuntu.go:71] root file system type: overlay
	I0601 04:18:17.264374   14036 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:18:17.264440   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.337947   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.338110   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.338170   14036 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:18:17.464122   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:18:17.464206   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.535287   14036 main.go:134] libmachine: Using SSH client type: native
	I0601 04:18:17.535439   14036 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53163 <nil> <nil>}
	I0601 04:18:17.535452   14036 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:18:17.656716   14036 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:18:17.656732   14036 machine.go:91] provisioned docker machine in 1.432374179s
	I0601 04:18:17.656739   14036 start.go:306] post-start starting for "no-preload-20220601041659-2342" (driver="docker")
	I0601 04:18:17.656743   14036 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:18:17.656811   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:18:17.656865   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.727465   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:17.821284   14036 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:18:17.825772   14036 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:18:17.825789   14036 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:18:17.825804   14036 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:18:17.825812   14036 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:18:17.825821   14036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:18:17.825928   14036 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:18:17.826061   14036 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:18:17.826225   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:18:17.833508   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:18:17.850697   14036 start.go:309] post-start completed in 193.939485ms
	I0601 04:18:17.850781   14036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:18:17.850824   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:17.926433   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.009814   14036 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:18:18.014092   14036 fix.go:57] fixHost completed within 2.451394081s
	I0601 04:18:18.014102   14036 start.go:81] releasing machines lock for "no-preload-20220601041659-2342", held for 2.451434151s
	I0601 04:18:18.014172   14036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220601041659-2342
	I0601 04:18:18.086484   14036 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:18:18.086486   14036 ssh_runner.go:195] Run: systemctl --version
	I0601 04:18:18.086599   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.086599   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.167367   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.170314   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:18:18.252773   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:18:18.385785   14036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:18:18.396076   14036 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:18:18.396130   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:18:18.405470   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:18:18.418490   14036 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:18:18.486961   14036 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:18:18.553762   14036 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:18:18.563812   14036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:18:18.628982   14036 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:18:18.638220   14036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:18:18.674484   14036 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:18:18.753443   14036 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:18:18.753615   14036 cli_runner.go:164] Run: docker exec -t no-preload-20220601041659-2342 dig +short host.docker.internal
	I0601 04:18:18.881761   14036 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:18:18.881851   14036 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:18:18.886028   14036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:18:18.895724   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:18.966314   14036 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:18:18.966372   14036 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:18:18.998564   14036 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:18:18.998580   14036 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:18:18.998654   14036 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:18:19.075561   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:19.075576   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:19.075610   14036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:18:19.075634   14036 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220601041659-2342 NodeName:no-preload-20220601041659-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:18:19.075750   14036 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "no-preload-20220601041659-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:18:19.075828   14036 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20220601041659-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:18:19.075885   14036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:18:19.083584   14036 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:18:19.083642   14036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:18:19.090501   14036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0601 04:18:19.102643   14036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:18:19.114867   14036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
	I0601 04:18:19.127704   14036 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:18:19.131229   14036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:18:19.140635   14036 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342 for IP: 192.168.49.2
	I0601 04:18:19.140743   14036 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:18:19.140794   14036 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:18:19.140880   14036 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.key
	I0601 04:18:19.140951   14036 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.key.dd3b5fb2
	I0601 04:18:19.141000   14036 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.key
	I0601 04:18:19.141188   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:18:19.141229   14036 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:18:19.141241   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:18:19.141271   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:18:19.141304   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:18:19.141334   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:18:19.141394   14036 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:18:19.141961   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:18:19.159226   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 04:18:19.175596   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:18:19.192259   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 04:18:19.210061   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:18:19.226574   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:18:19.243363   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:18:19.260116   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:18:19.277176   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:18:19.293972   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:18:19.310746   14036 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:18:19.327971   14036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:18:19.340461   14036 ssh_runner.go:195] Run: openssl version
	I0601 04:18:19.345544   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:18:19.353245   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.357005   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.357044   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:18:19.361993   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:18:19.369033   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:18:19.376927   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.380775   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.380813   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:18:19.385890   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:18:19.392971   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:18:19.400951   14036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.405006   14036 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.405058   14036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:18:19.410775   14036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:18:19.418340   14036 kubeadm.go:395] StartCluster: {Name:no-preload-20220601041659-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220601041659-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:18:19.418443   14036 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:18:19.448862   14036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:18:19.456368   14036 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:18:19.456381   14036 kubeadm.go:626] restartCluster start
	I0601 04:18:19.456423   14036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:18:19.463881   14036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:19.463941   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:18:19.568912   14036 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220601041659-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:18:19.569088   14036 kubeconfig.go:127] "no-preload-20220601041659-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:18:19.569472   14036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:18:19.570824   14036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:18:19.578849   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.578930   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:19.587781   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.440923   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:20.483332   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:20.514761   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.514773   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:20.514833   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:20.546039   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.546053   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:20.546108   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:20.575400   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.575414   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:20.575469   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:20.606603   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.606617   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:20.606680   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:20.635837   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.635849   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:20.635906   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:20.666144   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.666157   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:20.666211   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:20.694854   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.694866   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:20.694924   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:20.725318   13556 logs.go:274] 0 containers: []
	W0601 04:18:20.725331   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:20.725338   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:20.725345   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:20.778767   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:20.778778   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:20.778785   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:20.790876   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:20.790888   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:22.843261   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05233999s)
	I0601 04:18:22.843425   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:22.843432   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:22.886071   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:22.886084   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:19.789971   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.799683   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:19.810034   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:19.990006   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:19.990226   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.000773   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.190022   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.190234   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.201267   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.387932   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.388133   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.399100   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.588039   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.588104   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.597935   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.788903   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.788959   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:20.798457   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:20.990036   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:20.990239   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.000901   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.189970   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.190109   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.202580   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.390048   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.390138   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.402774   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.590024   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.590200   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.601412   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.787917   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.787977   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:21.797125   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:21.990025   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:21.990212   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.001077   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.190009   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.190214   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.201491   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.390189   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.390292   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.401348   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.588348   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.588437   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.597421   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.597432   14036 api_server.go:165] Checking apiserver status ...
	I0601 04:18:22.597484   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:18:22.605651   14036 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.605661   14036 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:18:22.605669   14036 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:18:22.605721   14036 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:18:22.637821   14036 docker.go:442] Stopping containers: [f3ab122c826b 1734d7965330 83819780dd93 b9768112bb8b 84611d08ab8e 48c8256f94d6 651e4e6fd977 4702c401989d f48f3e09df46 e1fc171fe8aa cd2a23f7c38c 85e4aa0cd1f6 030ece384801 b93e15c9f0f8 03abb63ba5d1 f241878ca7d9]
	I0601 04:18:22.637899   14036 ssh_runner.go:195] Run: docker stop f3ab122c826b 1734d7965330 83819780dd93 b9768112bb8b 84611d08ab8e 48c8256f94d6 651e4e6fd977 4702c401989d f48f3e09df46 e1fc171fe8aa cd2a23f7c38c 85e4aa0cd1f6 030ece384801 b93e15c9f0f8 03abb63ba5d1 f241878ca7d9
	I0601 04:18:22.668131   14036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:18:22.678704   14036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:18:22.686703   14036 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:17 /etc/kubernetes/scheduler.conf
	
	I0601 04:18:22.686754   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:18:22.694474   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:18:22.701738   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:18:22.708977   14036 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.709021   14036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:18:22.716105   14036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:18:22.723106   14036 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:18:22.723152   14036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:18:22.729915   14036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:22.737243   14036 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:22.737252   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:22.785192   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.493831   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.616417   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.664597   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:23.714856   14036 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:18:23.714918   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.224606   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.724651   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:24.741543   14036 api_server.go:71] duration metric: took 1.026678054s to wait for apiserver process to appear ...
	I0601 04:18:24.741573   14036 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:18:24.741609   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:24.743165   14036 api_server.go:256] stopped: https://127.0.0.1:53162/healthz: Get "https://127.0.0.1:53162/healthz": EOF
	I0601 04:18:25.399324   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:25.481380   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:25.515313   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.515325   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:25.515385   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:25.546864   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.546877   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:25.546942   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:25.582431   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.582445   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:25.582503   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:25.622691   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.622704   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:25.622766   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:25.654669   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.654682   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:25.654738   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:25.685692   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.685706   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:25.685765   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:25.719896   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.719910   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:25.719974   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:25.755042   13556 logs.go:274] 0 containers: []
	W0601 04:18:25.755058   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:25.755066   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:25.755074   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:25.815872   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:25.815883   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:25.815891   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:25.829154   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:25.829166   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:27.888157   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058957128s)
	I0601 04:18:27.888265   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:27.888293   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:27.929491   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:27.929508   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:25.243670   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:27.652029   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:18:27.652045   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:18:27.743312   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:27.749868   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:27.749888   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:28.243386   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:28.250986   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:28.250999   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:28.743315   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:28.749565   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:18:28.749583   14036 api_server.go:102] status: https://127.0.0.1:53162/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:18:29.243324   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:18:29.249665   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 200:
	ok
	I0601 04:18:29.256790   14036 api_server.go:140] control plane version: v1.23.6
	I0601 04:18:29.256806   14036 api_server.go:130] duration metric: took 4.515177636s to wait for apiserver health ...
	I0601 04:18:29.256812   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:18:29.256817   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:18:29.256824   14036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:18:29.265857   14036 system_pods.go:59] 8 kube-system pods found
	I0601 04:18:29.265875   14036 system_pods.go:61] "coredns-64897985d-89vc5" [95167d56-5dd4-4982-a6ca-86bb2e4620e3] Running
	I0601 04:18:29.265879   14036 system_pods.go:61] "etcd-no-preload-20220601041659-2342" [41190448-255a-49e9-b1e9-8ea601ad0843] Running
	I0601 04:18:29.265884   14036 system_pods.go:61] "kube-apiserver-no-preload-20220601041659-2342" [68c306bb-05ab-46ec-a523-865fe75e873a] Running
	I0601 04:18:29.265893   14036 system_pods.go:61] "kube-controller-manager-no-preload-20220601041659-2342" [e54984b5-ad07-42c7-8adc-e3d945a55efe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:18:29.265898   14036 system_pods.go:61] "kube-proxy-fgsgh" [bdfa1c31-6750-4343-b15b-08de66100496] Running
	I0601 04:18:29.265903   14036 system_pods.go:61] "kube-scheduler-no-preload-20220601041659-2342" [5e7b361b-cc2a-420b-83d2-0f0710b6dbd4] Running
	I0601 04:18:29.265908   14036 system_pods.go:61] "metrics-server-b955d9d8-64p54" [75ee83a8-d23f-44d3-ad4a-370743a2a88d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:18:29.265915   14036 system_pods.go:61] "storage-provisioner" [401f203f-92b1-4ae2-a59c-19909e579b9a] Running
	I0601 04:18:29.265919   14036 system_pods.go:74] duration metric: took 9.090386ms to wait for pod list to return data ...
	I0601 04:18:29.265926   14036 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:18:29.268895   14036 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:18:29.268910   14036 node_conditions.go:123] node cpu capacity is 6
	I0601 04:18:29.268921   14036 node_conditions.go:105] duration metric: took 2.991293ms to run NodePressure ...
	I0601 04:18:29.268932   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:18:29.533767   14036 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 04:18:29.539069   14036 kubeadm.go:777] kubelet initialised
	I0601 04:18:29.539090   14036 kubeadm.go:778] duration metric: took 5.30228ms waiting for restarted kubelet to initialise ...
	I0601 04:18:29.539104   14036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:18:29.544448   14036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-89vc5" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.550983   14036 pod_ready.go:92] pod "coredns-64897985d-89vc5" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.550993   14036 pod_ready.go:81] duration metric: took 6.531028ms waiting for pod "coredns-64897985d-89vc5" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.550999   14036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.596543   14036 pod_ready.go:92] pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.596558   14036 pod_ready.go:81] duration metric: took 45.552599ms waiting for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.596566   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.603596   14036 pod_ready.go:92] pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:29.603609   14036 pod_ready.go:81] duration metric: took 7.03783ms waiting for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:29.603621   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:30.444730   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:30.481478   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:30.511666   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.511679   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:30.511732   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:30.542700   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.542715   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:30.542772   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:30.572035   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.572047   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:30.572104   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:30.603167   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.603179   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:30.603238   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:30.632389   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.632402   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:30.632456   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:30.660425   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.660437   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:30.660494   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:30.692427   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.692440   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:30.692498   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:30.721182   13556 logs.go:274] 0 containers: []
	W0601 04:18:30.721194   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:30.721201   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:30.721209   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:30.763615   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:30.763627   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:30.779090   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:30.779105   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:30.837839   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:30.837850   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:30.837857   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:30.851365   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:30.851379   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:32.907858   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056443603s)
	I0601 04:18:31.668923   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:34.169366   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:35.408111   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:35.483017   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:18:35.513087   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.513099   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:18:35.513153   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:18:35.541148   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.541161   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:18:35.541222   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:18:35.569639   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.569652   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:18:35.569708   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:18:35.599189   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.599201   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:18:35.599254   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:18:35.628983   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.628995   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:18:35.629052   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:18:35.658557   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.658569   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:18:35.658623   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:18:35.691031   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.691058   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:18:35.691174   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:18:35.721259   13556 logs.go:274] 0 containers: []
	W0601 04:18:35.721271   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:18:35.721277   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:18:35.721284   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:18:35.733301   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:18:35.733315   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:18:35.785853   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:18:35.785866   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:18:35.785872   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:18:35.799604   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:18:35.799616   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:18:37.856133   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056481616s)
	I0601 04:18:37.856244   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:18:37.856250   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:18:36.666864   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:38.669699   14036 pod_ready.go:102] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:40.397963   13556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:18:40.408789   13556 kubeadm.go:630] restartCluster took 4m7.458583962s
	W0601 04:18:40.408865   13556 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0601 04:18:40.408881   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:18:40.824000   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:18:40.833055   13556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:18:40.846500   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:18:40.846568   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:18:40.859653   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:18:40.859688   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:18:41.605164   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:18:42.649022   13556 out.go:204]   - Booting up control plane ...
	I0601 04:18:40.667004   14036 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.667016   14036 pod_ready.go:81] duration metric: took 11.063266842s waiting for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.667023   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fgsgh" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.670891   14036 pod_ready.go:92] pod "kube-proxy-fgsgh" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.670899   14036 pod_ready.go:81] duration metric: took 3.871243ms waiting for pod "kube-proxy-fgsgh" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.670904   14036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.675132   14036 pod_ready.go:92] pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:18:40.675141   14036 pod_ready.go:81] duration metric: took 4.221246ms waiting for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:40.675147   14036 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace to be "Ready" ...
	I0601 04:18:42.684353   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:44.685528   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:46.687697   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:48.688040   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:51.186606   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:53.187481   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:55.188655   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:18:57.687295   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:00.185897   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:02.686911   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:04.688219   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:07.185914   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:09.188825   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:11.688002   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:14.187662   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:16.188114   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:18.188168   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:20.188303   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:22.688347   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:25.186136   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:27.188737   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:29.685888   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:31.687374   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:33.688010   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:35.688223   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:38.186051   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:40.685861   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:43.184495   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:45.184738   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:47.188619   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:49.688777   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:52.187737   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:54.188673   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:56.685934   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:19:58.688410   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:01.185111   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:03.185648   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:05.186993   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:07.188774   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:09.189538   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:11.687713   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:13.688917   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:16.189541   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:18.689213   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:21.186825   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:23.187139   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:25.187762   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:27.687801   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:29.689029   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:32.186695   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:34.188405   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	W0601 04:20:37.567191   13556 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 04:20:37.567223   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:20:37.985183   13556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:20:37.995063   13556 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:20:37.995115   13556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:20:38.003134   13556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:20:38.003167   13556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:20:38.714980   13556 out.go:204]   - Generating certificates and keys ...
	I0601 04:20:36.688566   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:39.188270   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:39.157245   13556 out.go:204]   - Booting up control plane ...
	I0601 04:20:41.688898   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:44.185451   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:46.186959   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:48.685368   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:50.687302   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:53.186843   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:55.189047   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:57.189228   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:20:59.689548   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:02.185896   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:04.687858   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:07.189539   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:09.687226   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:11.689231   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:14.186006   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:16.188122   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:18.688041   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:20.695527   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:23.199009   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:25.203674   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:27.704456   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:30.208738   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:32.711751   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:34.714219   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:37.216949   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:39.714895   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:41.720032   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:44.217871   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:46.221799   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:48.720839   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:50.722793   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:53.221273   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:55.223691   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:57.723160   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:21:59.724889   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:02.222785   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:04.225050   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:06.723223   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:08.723519   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:10.726350   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:13.225782   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:15.229179   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:17.726361   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:20.226547   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:22.228014   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:24.725636   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:26.726651   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:29.225011   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:31.725015   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:33.726567   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:34.113538   13556 kubeadm.go:397] StartCluster complete in 8m1.165906933s
	I0601 04:22:34.113614   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 04:22:34.143687   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.143700   13556 logs.go:276] No container was found matching "kube-apiserver"
	I0601 04:22:34.143755   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 04:22:34.173703   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.173716   13556 logs.go:276] No container was found matching "etcd"
	I0601 04:22:34.173771   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 04:22:34.204244   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.204257   13556 logs.go:276] No container was found matching "coredns"
	I0601 04:22:34.204312   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 04:22:34.235759   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.235775   13556 logs.go:276] No container was found matching "kube-scheduler"
	I0601 04:22:34.235836   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 04:22:34.265295   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.265308   13556 logs.go:276] No container was found matching "kube-proxy"
	I0601 04:22:34.265362   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 04:22:34.294194   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.294207   13556 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 04:22:34.294263   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 04:22:34.323578   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.323590   13556 logs.go:276] No container was found matching "storage-provisioner"
	I0601 04:22:34.323645   13556 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 04:22:34.353103   13556 logs.go:274] 0 containers: []
	W0601 04:22:34.353115   13556 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 04:22:34.353122   13556 logs.go:123] Gathering logs for kubelet ...
	I0601 04:22:34.353128   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 04:22:34.396193   13556 logs.go:123] Gathering logs for dmesg ...
	I0601 04:22:34.396212   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 04:22:34.408612   13556 logs.go:123] Gathering logs for describe nodes ...
	I0601 04:22:34.408626   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 04:22:34.471074   13556 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 04:22:34.471086   13556 logs.go:123] Gathering logs for Docker ...
	I0601 04:22:34.471093   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 04:22:34.483079   13556 logs.go:123] Gathering logs for container status ...
	I0601 04:22:34.483090   13556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 04:22:36.538288   13556 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055125762s)
	W0601 04:22:36.538414   13556 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 04:22:36.538429   13556 out.go:239] * 
	W0601 04:22:36.538563   13556 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.538581   13556 out.go:239] * 
	W0601 04:22:36.539131   13556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 04:22:36.603750   13556 out.go:177] 
	W0601 04:22:36.646990   13556 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 04:22:36.647054   13556 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 04:22:36.647091   13556 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 04:22:36.667708   13556 out.go:177] 
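	The suggestion above can be retried roughly as follows (a sketch only; <profile> is a placeholder for the failing profile name):
	    minikube start -p <profile> --driver=docker --extra-config=kubelet.cgroup-driver=systemd
	    # if it still times out, inspect the kubelet logs on the node:
	    minikube ssh -p <profile> "sudo journalctl -xeu kubelet"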
	I0601 04:22:36.224570   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:38.226521   14036 pod_ready.go:102] pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace has status "Ready":"False"
	I0601 04:22:40.720948   14036 pod_ready.go:81] duration metric: took 4m0.004760286s waiting for pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace to be "Ready" ...
	E0601 04:22:40.720960   14036 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-64p54" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 04:22:40.720971   14036 pod_ready.go:38] duration metric: took 4m11.140717277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:22:40.720995   14036 kubeadm.go:630] restartCluster took 4m21.223355239s
	W0601 04:22:40.721068   14036 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 04:22:40.721085   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:23:19.206218   14036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.484454591s)
	I0601 04:23:19.206280   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:23:19.216115   14036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:23:19.224292   14036 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:23:19.224335   14036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:23:19.231888   14036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:23:19.231915   14036 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:23:19.732100   14036 out.go:204]   - Generating certificates and keys ...
	I0601 04:23:20.637005   14036 out.go:204]   - Booting up control plane ...
	I0601 04:23:27.184771   14036 out.go:204]   - Configuring RBAC rules ...
	I0601 04:23:27.636872   14036 cni.go:95] Creating CNI manager for ""
	I0601 04:23:27.636887   14036 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:23:27.636907   14036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:23:27.636996   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=no-preload-20220601041659-2342 minikube.k8s.io/updated_at=2022_06_01T04_23_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:27.637021   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:27.652079   14036 ops.go:34] apiserver oom_adj: -16
	I0601 04:23:27.830215   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:28.430751   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:28.930740   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:29.431114   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:29.930775   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:30.430813   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:30.931405   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:31.431219   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:31.931379   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:32.432223   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:32.931433   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:33.432954   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:33.931528   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:34.430840   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:34.930779   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:35.431436   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:35.931212   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:36.431621   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:36.932267   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:37.430837   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:37.932483   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:38.433048   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:38.931387   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:39.432562   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:39.930919   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:40.431610   14036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:23:40.485985   14036 kubeadm.go:1045] duration metric: took 12.84888152s to wait for elevateKubeSystemPrivileges.
	I0601 04:23:40.486001   14036 kubeadm.go:397] StartCluster complete in 5m21.025467082s
	I0601 04:23:40.486017   14036 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:23:40.486096   14036 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:23:40.486639   14036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:23:41.002750   14036 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220601041659-2342" rescaled to 1
	I0601 04:23:41.002794   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:23:41.002822   14036 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:23:41.002790   14036 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:23:41.024225   14036 out.go:177] * Verifying Kubernetes components...
	I0601 04:23:41.002870   14036 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.002871   14036 addons.go:65] Setting metrics-server=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.002882   14036 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.045170   14036 addons.go:153] Setting addon metrics-server=true in "no-preload-20220601041659-2342"
	W0601 04:23:41.045190   14036 addons.go:165] addon metrics-server should already be in state true
	I0601 04:23:41.002905   14036 addons.go:65] Setting dashboard=true in profile "no-preload-20220601041659-2342"
	I0601 04:23:41.045204   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:23:41.045206   14036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220601041659-2342"
	I0601 04:23:41.045224   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.002991   14036 config.go:178] Loaded profile config "no-preload-20220601041659-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:23:41.045243   14036 addons.go:153] Setting addon dashboard=true in "no-preload-20220601041659-2342"
	W0601 04:23:41.045258   14036 addons.go:165] addon dashboard should already be in state true
	I0601 04:23:41.024274   14036 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220601041659-2342"
	W0601 04:23:41.045272   14036 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:23:41.045301   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.045313   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.045537   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.045591   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.045632   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.045667   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.060815   14036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:23:41.063469   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.193932   14036 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:23:41.175016   14036 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220601041659-2342"
	I0601 04:23:41.229941   14036 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	W0601 04:23:41.229964   14036 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:23:41.267107   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:23:41.283785   14036 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220601041659-2342" to be "Ready" ...
	I0601 04:23:41.304029   14036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:23:41.304040   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:23:41.304068   14036 host.go:66] Checking if "no-preload-20220601041659-2342" exists ...
	I0601 04:23:41.310158   14036 node_ready.go:49] node "no-preload-20220601041659-2342" has status "Ready":"True"
	I0601 04:23:41.378018   14036 node_ready.go:38] duration metric: took 73.984801ms waiting for node "no-preload-20220601041659-2342" to be "Ready" ...
	I0601 04:23:41.378031   14036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:23:41.341073   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.341078   14036 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:23:41.378079   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:23:41.415181   14036 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:23:41.341460   14036 cli_runner.go:164] Run: docker container inspect no-preload-20220601041659-2342 --format={{.State.Status}}
	I0601 04:23:41.383871   14036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-4th8d" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:41.415313   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.451975   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:23:41.451990   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:23:41.452058   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.476653   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.567212   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.567687   14036 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:23:41.567699   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:23:41.567752   14036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601041659-2342
	I0601 04:23:41.571152   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.641726   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:23:41.641740   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:23:41.653203   14036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53163 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601041659-2342/id_rsa Username:docker}
	I0601 04:23:41.729747   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:23:41.729770   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:23:41.748626   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:23:41.748639   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:23:41.813442   14036 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:23:41.813470   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:23:41.835895   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:23:41.835912   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:23:41.838988   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:23:41.839581   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:23:41.918717   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:23:41.918736   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:23:41.946533   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:23:42.039688   14036 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
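	The sed pipeline logged at 04:23:41 above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host; judging from that command, the fragment it inserts immediately before the existing `forward . /etc/resolv.conf` directive looks roughly like:
	    hosts {
	       192.168.65.2 host.minikube.internal
	       fallthrough
	    }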
	I0601 04:23:42.043776   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:23:42.043789   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:23:42.144037   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:23:42.144061   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:23:42.247696   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:23:42.247710   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:23:42.325940   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:23:42.325956   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:23:42.353302   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:23:42.353328   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:23:42.450282   14036 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:23:42.450297   14036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:23:42.617845   14036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:23:42.836668   14036 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220601041659-2342"
	I0601 04:23:43.469737   14036 pod_ready.go:92] pod "coredns-64897985d-4th8d" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.469754   14036 pod_ready.go:81] duration metric: took 2.017769245s waiting for pod "coredns-64897985d-4th8d" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.469762   14036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.479814   14036 pod_ready.go:92] pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.479825   14036 pod_ready.go:81] duration metric: took 10.057047ms waiting for pod "etcd-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.479832   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.487495   14036 pod_ready.go:92] pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.487520   14036 pod_ready.go:81] duration metric: took 7.682581ms waiting for pod "kube-apiserver-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.487547   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.494534   14036 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.494546   14036 pod_ready.go:81] duration metric: took 6.994344ms waiting for pod "kube-controller-manager-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.494568   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ff67" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.501300   14036 pod_ready.go:92] pod "kube-proxy-7ff67" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.501324   14036 pod_ready.go:81] duration metric: took 6.751865ms waiting for pod "kube-proxy-7ff67" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.501351   14036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.749111   14036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.131199262s)
	I0601 04:23:43.774836   14036 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0601 04:23:43.849116   14036 addons.go:417] enableAddons completed in 2.846213477s
	I0601 04:23:43.868583   14036 pod_ready.go:92] pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:23:43.868594   14036 pod_ready.go:81] duration metric: took 367.23204ms waiting for pod "kube-scheduler-no-preload-20220601041659-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:23:43.868604   14036 pod_ready.go:38] duration metric: took 2.490525792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:23:43.868620   14036 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:23:43.868683   14036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:23:43.878538   14036 api_server.go:71] duration metric: took 2.875633694s to wait for apiserver process to appear ...
	I0601 04:23:43.878552   14036 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:23:43.878559   14036 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53162/healthz ...
	I0601 04:23:43.883418   14036 api_server.go:266] https://127.0.0.1:53162/healthz returned 200:
	ok
	I0601 04:23:43.884917   14036 api_server.go:140] control plane version: v1.23.6
	I0601 04:23:43.884924   14036 api_server.go:130] duration metric: took 6.368843ms to wait for apiserver health ...
	I0601 04:23:43.884929   14036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:23:44.070448   14036 system_pods.go:59] 8 kube-system pods found
	I0601 04:23:44.070463   14036 system_pods.go:61] "coredns-64897985d-4th8d" [1e28756e-461b-4daf-a314-201279e6f280] Running
	I0601 04:23:44.070467   14036 system_pods.go:61] "etcd-no-preload-20220601041659-2342" [f062c2b7-2336-4312-b729-118c8a26d909] Running
	I0601 04:23:44.070477   14036 system_pods.go:61] "kube-apiserver-no-preload-20220601041659-2342" [3fb9b4d8-e18d-4c5a-812d-7c0f81615e1f] Running
	I0601 04:23:44.070482   14036 system_pods.go:61] "kube-controller-manager-no-preload-20220601041659-2342" [a9257f5c-b1fb-4410-ba04-98f3f46b470f] Running
	I0601 04:23:44.070488   14036 system_pods.go:61] "kube-proxy-7ff67" [bba8c125-49b9-46a6-bd15-66a15ed18932] Running
	I0601 04:23:44.070491   14036 system_pods.go:61] "kube-scheduler-no-preload-20220601041659-2342" [d2b3afb0-7805-466b-89d0-8bf20f418464] Running
	I0601 04:23:44.070501   14036 system_pods.go:61] "metrics-server-b955d9d8-dspp8" [da6693e5-ac7d-49f9-8894-8b27a22ee111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:23:44.070505   14036 system_pods.go:61] "storage-provisioner" [164a6111-4f51-4058-bfd8-8e81ce03ab6f] Running
	I0601 04:23:44.070509   14036 system_pods.go:74] duration metric: took 185.574848ms to wait for pod list to return data ...
	I0601 04:23:44.070514   14036 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:23:44.266163   14036 default_sa.go:45] found service account: "default"
	I0601 04:23:44.266174   14036 default_sa.go:55] duration metric: took 195.654314ms for default service account to be created ...
	I0601 04:23:44.266179   14036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:23:44.469975   14036 system_pods.go:86] 8 kube-system pods found
	I0601 04:23:44.469988   14036 system_pods.go:89] "coredns-64897985d-4th8d" [1e28756e-461b-4daf-a314-201279e6f280] Running
	I0601 04:23:44.469992   14036 system_pods.go:89] "etcd-no-preload-20220601041659-2342" [f062c2b7-2336-4312-b729-118c8a26d909] Running
	I0601 04:23:44.469996   14036 system_pods.go:89] "kube-apiserver-no-preload-20220601041659-2342" [3fb9b4d8-e18d-4c5a-812d-7c0f81615e1f] Running
	I0601 04:23:44.470000   14036 system_pods.go:89] "kube-controller-manager-no-preload-20220601041659-2342" [a9257f5c-b1fb-4410-ba04-98f3f46b470f] Running
	I0601 04:23:44.470003   14036 system_pods.go:89] "kube-proxy-7ff67" [bba8c125-49b9-46a6-bd15-66a15ed18932] Running
	I0601 04:23:44.470009   14036 system_pods.go:89] "kube-scheduler-no-preload-20220601041659-2342" [d2b3afb0-7805-466b-89d0-8bf20f418464] Running
	I0601 04:23:44.470015   14036 system_pods.go:89] "metrics-server-b955d9d8-dspp8" [da6693e5-ac7d-49f9-8894-8b27a22ee111] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:23:44.470020   14036 system_pods.go:89] "storage-provisioner" [164a6111-4f51-4058-bfd8-8e81ce03ab6f] Running
	I0601 04:23:44.470024   14036 system_pods.go:126] duration metric: took 203.839765ms to wait for k8s-apps to be running ...
	I0601 04:23:44.470029   14036 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:23:44.470075   14036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:23:44.479988   14036 system_svc.go:56] duration metric: took 9.950615ms WaitForService to wait for kubelet.
	I0601 04:23:44.480003   14036 kubeadm.go:572] duration metric: took 3.477092877s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:23:44.480018   14036 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:23:44.669627   14036 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:23:44.669642   14036 node_conditions.go:123] node cpu capacity is 6
	I0601 04:23:44.669651   14036 node_conditions.go:105] duration metric: took 189.627547ms to run NodePressure ...
	I0601 04:23:44.669660   14036 start.go:213] waiting for startup goroutines ...
	I0601 04:23:44.700018   14036 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:23:44.722598   14036 out.go:177] * Done! kubectl is now configured to use "no-preload-20220601041659-2342" cluster and "default" namespace by default
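	At this point the addons listed at 04:23:43 (storage-provisioner, metrics-server, default-storageclass, dashboard) should be rolling out; a quick way to verify from the host would be something like the commands below, assuming the kubectl context name matches the profile name as the "Done!" line indicates:
	    kubectl --context no-preload-20220601041659-2342 -n kube-system get pods
	    kubectl --context no-preload-20220601041659-2342 -n kubernetes-dashboard get pods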
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:18:16 UTC, end at Wed 2022-06-01 11:24:48 UTC. --
	Jun 01 11:22:57 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:22:57.662315455Z" level=info msg="ignoring event" container=01045ae278a204c9119c01c95bc94c433879d2e92125b6f492a9f8760753b48b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:07 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:07.753645699Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=5d7c2e3d72fe9009f71d9e49b25a461f6b111a7657c8dd2cc6a4815728e598af
	Jun 01 11:23:07 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:07.808804529Z" level=info msg="ignoring event" container=5d7c2e3d72fe9009f71d9e49b25a461f6b111a7657c8dd2cc6a4815728e598af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:17 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:17.877612440Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=57fbfccc998db276eb09e3aa6df866227f86fa1043e28783206a47098ab8d1e2
	Jun 01 11:23:17 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:17.906174016Z" level=info msg="ignoring event" container=57fbfccc998db276eb09e3aa6df866227f86fa1043e28783206a47098ab8d1e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.010931680Z" level=info msg="ignoring event" container=254cea236d30b950c485b96f520899815cb7a6d570a78a6df40a5de4c927dda3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.114730603Z" level=info msg="ignoring event" container=1e33579088f2c1c8a8c88d5587b71940c8bf92928270f8fc2cb07ef429c4bc74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.219816478Z" level=info msg="ignoring event" container=57466844373b9d83cf886e94d47f7e9309ae7b4668f8b9659a9c571a7120b0de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:18 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:18.351475023Z" level=info msg="ignoring event" container=3f714de6344bb4a29f0e59684a9b1eb148ed6cb07dbef1c1c67db88eef097faf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:43 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:43.511008128Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:23:43 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:43.511075446Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:23:43 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:43.512680720Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:23:45 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:45.612898249Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:23:45 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:45.821166177Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:23:49 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:49.063495338Z" level=info msg="ignoring event" container=b2efd81fcfbf952661f88b51f3ff217a47410f4da8aa8affe391906827b571ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:23:49 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:49.079413840Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 11:23:49 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:23:49.650068343Z" level=info msg="ignoring event" container=f5becf26faa418f08d0eb667b16f1da433ffedaddc5edee88000ca9b67f25f0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:24:00 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:00.852203954Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:00 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:00.852337102Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:00 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:00.853615723Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:04 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:04.948083924Z" level=info msg="ignoring event" container=da95c41640e4443a86eadcfe89569bc1e020b9315adf526c8addb12e0852f358 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:24:46 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:46.069352760Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:46 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:46.069858859Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:46 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:46.073057273Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:24:46 no-preload-20220601041659-2342 dockerd[131]: time="2022-06-01T11:24:46.075678498Z" level=info msg="ignoring event" container=09fac6c911ec3cdcab4776a9c24cda7c68d7e5d90dfc326bf49cbc4f7705f146 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	09fac6c911ec3       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   64d304f23d052
	d096a82b3ce0e       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   55 seconds ago       Running             kubernetes-dashboard        0                   ab25ec0d8943c
	f288409ac9503       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   72832b818d097
	8ec6fde60edd1       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   5a550f203ec45
	f66543b57764b       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   5b1302cc8d6f7
	d2b9c51db2b2b       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   ad72a05d02460
	64c417b589d52       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   1f70d997cffdb
	ac52a9b775d7d       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   5bc8932ec5c76
	10f840b7399c6       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   e30a00d0dbfb0
	
	* 
	* ==> coredns [8ec6fde60edd] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220601041659-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220601041659-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=no-preload-20220601041659-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_23_27_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220601041659-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:24:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:24:42 +0000   Wed, 01 Jun 2022 11:24:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    no-preload-20220601041659-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                e23cd456-aaaa-4c54-8dbe-cf17db0b9e1d
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-4th8d                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     69s
	  kube-system                 etcd-no-preload-20220601041659-2342                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kube-apiserver-no-preload-20220601041659-2342             250m (4%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-no-preload-20220601041659-2342    200m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-7ff67                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-no-preload-20220601041659-2342             100m (1%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 metrics-server-b955d9d8-dspp8                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         67s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-97rct                0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-jxm74                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 66s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    88s (x4 over 88s)  kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x4 over 88s)  kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  88s (x5 over 88s)  kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 82s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                72s                kubelet     Node no-preload-20220601041659-2342 status is now: NodeReady
	  Normal  Starting                 8s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x2 over 8s)    kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x2 over 8s)    kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x2 over 8s)    kubelet     Node no-preload-20220601041659-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node no-preload-20220601041659-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet     Node no-preload-20220601041659-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [ac52a9b775d7] <==
	* {"level":"info","ts":"2022-06-01T11:23:22.147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:23:22.147Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:23:22.148Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:23:22.742Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:no-preload-20220601041659-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:23:22.743Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:23:22.744Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:23:22.744Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:24:49 up  1:05,  0 users,  load average: 0.73, 0.75, 0.89
	Linux no-preload-20220601041659-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [64c417b589d5] <==
	* I0601 11:23:25.898650       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:23:25.921086       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:23:25.987476       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:23:25.991194       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:23:25.991837       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:23:25.994422       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:23:26.779298       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:23:27.418634       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:23:27.426680       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:23:27.436953       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:23:27.593387       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:23:40.071234       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:23:40.532927       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:23:42.251826       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:23:42.843000       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.105.218.124]
	W0601 11:23:43.653241       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:23:43.653292       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:23:43.653298       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:23:43.751938       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.187.56]
	I0601 11:23:43.760773       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.182.160]
	W0601 11:24:43.611862       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:24:43.611964       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:24:43.611990       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [10f840b7399c] <==
	* I0601 11:23:43.622822       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0601 11:23:43.631680       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:23:43.635642       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:23:43.635854       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:23:43.635868       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:23:43.642464       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:23:43.643223       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:23:43.643583       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:23:43.651782       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:23:43.652041       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:23:43.659606       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-jxm74"
	I0601 11:23:43.672056       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-97rct"
	E0601 11:24:41.338577       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0601 11:24:41.388194       1 event.go:294] "Event occurred" object="no-preload-20220601041659-2342" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node no-preload-20220601041659-2342 status is now: NodeNotReady"
	I0601 11:24:41.397463       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0601 11:24:41.397982       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0601 11:24:41.402750       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.410640       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d-4th8d" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.417418       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77-jxm74" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.501273       1 event.go:294] "Event occurred" object="kube-system/etcd-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.513971       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-7ff67" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.519888       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:41.525501       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 11:24:41.525626       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-no-preload-20220601041659-2342" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 11:24:46.505122       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [f66543b57764] <==
	* I0601 11:23:42.141364       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:23:42.141434       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:23:42.141572       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:23:42.246760       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:23:42.246832       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:23:42.246838       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:23:42.246847       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:23:42.247168       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:23:42.249477       1 config.go:317] "Starting service config controller"
	I0601 11:23:42.249540       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:23:42.249578       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:23:42.249582       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:23:42.350230       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:23:42.350261       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [d2b9c51db2b2] <==
	* W0601 11:23:24.738669       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:23:24.738700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:23:24.738814       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:23:24.738935       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:23:24.738875       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:23:24.738998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:23:24.739104       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:23:24.739849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:23:24.739083       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 11:23:24.739885       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 11:23:25.558895       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:23:25.558945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:23:25.613199       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:23:25.613237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:23:25.613904       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:23:25.613936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:23:25.623358       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:23:25.623392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:23:25.660804       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:23:25.660842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:23:25.736301       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:23:25.736340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:23:25.895320       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:23:25.895338       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:23:27.732245       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:18:16 UTC, end at Wed 2022-06-01 11:24:50 UTC. --
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.097917    7101 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wwv2\" (UniqueName: \"kubernetes.io/projected/1e28756e-461b-4daf-a314-201279e6f280-kube-api-access-7wwv2\") pod \"coredns-64897985d-4th8d\" (UID: \"1e28756e-461b-4daf-a314-201279e6f280\") " pod="kube-system/coredns-64897985d-4th8d"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:43.098021    7101 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:43.454431    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220601041659-2342\" already exists" pod="kube-system/kube-scheduler-no-preload-20220601041659-2342"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:43.634926    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220601041659-2342\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220601041659-2342"
	Jun 01 11:24:43 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:43.834724    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220601041659-2342\" already exists" pod="kube-system/etcd-no-preload-20220601041659-2342"
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.035239    7101 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220601041659-2342\" already exists" pod="kube-system/kube-apiserver-no-preload-20220601041659-2342"
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.199867    7101 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.200055    7101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bba8c125-49b9-46a6-bd15-66a15ed18932-kube-proxy podName:bba8c125-49b9-46a6-bd15-66a15ed18932 nodeName:}" failed. No retries permitted until 2022-06-01 11:24:44.700028949 +0000 UTC m=+3.062232882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bba8c125-49b9-46a6-bd15-66a15ed18932-kube-proxy") pod "kube-proxy-7ff67" (UID: "bba8c125-49b9-46a6-bd15-66a15ed18932") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.199877    7101 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:44.200519    7101 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1e28756e-461b-4daf-a314-201279e6f280-config-volume podName:1e28756e-461b-4daf-a314-201279e6f280 nodeName:}" failed. No retries permitted until 2022-06-01 11:24:44.700497237 +0000 UTC m=+3.062701168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e28756e-461b-4daf-a314-201279e6f280-config-volume") pod "coredns-64897985d-4th8d" (UID: "1e28756e-461b-4daf-a314-201279e6f280") : failed to sync configmap cache: timed out waiting for the condition
	Jun 01 11:24:44 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:44.229922    7101 request.go:665] Waited for 1.195398024s due to client-side throttling, not priority and fairness, request: GET:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&limit=500&resourceVersion=0
	Jun 01 11:24:45 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:45.713621    7101 scope.go:110] "RemoveContainer" containerID="da95c41640e4443a86eadcfe89569bc1e020b9315adf526c8addb12e0852f358"
	Jun 01 11:24:46 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:46.024732    7101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-97rct through plugin: invalid network status for"
	Jun 01 11:24:46 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:46.074049    7101 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:24:46 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:46.074087    7101 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:24:46 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:46.074206    7101 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dn6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHan
dler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-dspp8_kube-system(da6693e5-ac7d-49f9-8894-8b27a22ee111): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 11:24:46 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:46.074233    7101 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-dspp8" podUID=da6693e5-ac7d-49f9-8894-8b27a22ee111
	Jun 01 11:24:47 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:47.058896    7101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-97rct through plugin: invalid network status for"
	Jun 01 11:24:47 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:47.063121    7101 scope.go:110] "RemoveContainer" containerID="da95c41640e4443a86eadcfe89569bc1e020b9315adf526c8addb12e0852f358"
	Jun 01 11:24:47 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:47.063312    7101 scope.go:110] "RemoveContainer" containerID="09fac6c911ec3cdcab4776a9c24cda7c68d7e5d90dfc326bf49cbc4f7705f146"
	Jun 01 11:24:47 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:47.063466    7101 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-97rct_kubernetes-dashboard(10921038-abaf-4aac-94d4-5d91d28cb902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-97rct" podUID=10921038-abaf-4aac-94d4-5d91d28cb902
	Jun 01 11:24:48 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:48.043597    7101 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Jun 01 11:24:48 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:48.070192    7101 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-97rct through plugin: invalid network status for"
	Jun 01 11:24:48 no-preload-20220601041659-2342 kubelet[7101]: I0601 11:24:48.433555    7101 scope.go:110] "RemoveContainer" containerID="09fac6c911ec3cdcab4776a9c24cda7c68d7e5d90dfc326bf49cbc4f7705f146"
	Jun 01 11:24:48 no-preload-20220601041659-2342 kubelet[7101]: E0601 11:24:48.433898    7101 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-97rct_kubernetes-dashboard(10921038-abaf-4aac-94d4-5d91d28cb902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-97rct" podUID=10921038-abaf-4aac-94d4-5d91d28cb902
	
	* 
	* ==> kubernetes-dashboard [d096a82b3ce0] <==
	* 2022/06/01 11:23:54 Using namespace: kubernetes-dashboard
	2022/06/01 11:23:54 Using in-cluster config to connect to apiserver
	2022/06/01 11:23:54 Using secret token for csrf signing
	2022/06/01 11:23:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 11:23:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 11:23:54 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 11:23:54 Generating JWE encryption key
	2022/06/01 11:23:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 11:23:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 11:23:54 Initializing JWE encryption key from synchronized object
	2022/06/01 11:23:54 Creating in-cluster Sidecar client
	2022/06/01 11:23:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:23:54 Serving insecurely on HTTP port: 9090
	2022/06/01 11:24:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:23:54 Starting overwatch
	
	* 
	* ==> storage-provisioner [f288409ac950] <==
	* I0601 11:23:43.258660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:23:43.270224       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:23:43.270277       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:23:43.275536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:23:43.275771       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220601041659-2342_c96fd14b-cab0-4ca2-826a-3b61b8568c60!
	I0601 11:23:43.276369       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f0eb7c0-4c28-4a00-817e-70070eb6ac8d", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220601041659-2342_c96fd14b-cab0-4ca2-826a-3b61b8568c60 became leader
	I0601 11:23:43.376685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220601041659-2342_c96fd14b-cab0-4ca2-826a-3b61b8568c60!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-dspp8
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 describe pod metrics-server-b955d9d8-dspp8
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220601041659-2342 describe pod metrics-server-b955d9d8-dspp8: exit status 1 (276.080613ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-dspp8" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220601041659-2342 describe pod metrics-server-b955d9d8-dspp8: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (44.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (44.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220601042455-2342 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342: exit status 2 (16.114354167s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342: exit status 2 (16.140397132s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220601042455-2342 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220601042455-2342 --alsologtostderr -v=1: (1.159960112s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601042455-2342
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601042455-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5",
	        "Created": "2022-06-01T11:25:02.103923004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:26:01.823416021Z",
	            "FinishedAt": "2022-06-01T11:25:59.819385515Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/hosts",
	        "LogPath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5-json.log",
	        "Name": "/default-k8s-different-port-20220601042455-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601042455-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601042455-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601042455-2342",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601042455-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601042455-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601042455-2342",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601042455-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4be6775a93b8621d58014a5ecfab0be854567a196359d6edc6c06b5055706665",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54220"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54222"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54223"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4be6775a93b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601042455-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "40099052bd12",
	                        "default-k8s-different-port-20220601042455-2342"
	                    ],
	                    "NetworkID": "5f3c12cd19ff58d51e01757fec6b82d20b68f4cb21bd8ce16a3e44d0d6a0e4a2",
	                    "EndpointID": "5a96a10cec056260adf74fde055f6a475a03205be170a18adc34b9e8f7adce93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220601042455-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220601042455-2342 logs -n 25: (3.13664548s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | embed-certs-20220601040915-2342                   | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:17 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342               | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:22 PDT | 01 Jun 22 04:22 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:23 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                    | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                    | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342               | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:26:00
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:26:00.480154   14580 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:26:00.480367   14580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:26:00.480372   14580 out.go:309] Setting ErrFile to fd 2...
	I0601 04:26:00.480376   14580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:26:00.480472   14580 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:26:00.480724   14580 out.go:303] Setting JSON to false
	I0601 04:26:00.495972   14580 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5130,"bootTime":1654077630,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:26:00.496148   14580 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:26:00.518702   14580 out.go:177] * [default-k8s-different-port-20220601042455-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:26:00.540230   14580 notify.go:193] Checking for updates...
	I0601 04:26:00.562166   14580 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:26:00.584188   14580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:26:00.606131   14580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:26:00.628160   14580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:26:00.649242   14580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:26:00.671625   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:26:00.672286   14580 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:26:00.744341   14580 docker.go:137] docker version: linux-20.10.14
	I0601 04:26:00.744470   14580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:26:00.876577   14580 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:26:00.816467951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:26:00.952426   14580 out.go:177] * Using the docker driver based on existing profile
	I0601 04:26:00.974167   14580 start.go:284] selected driver: docker
	I0601 04:26:00.974245   14580 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-
20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:00.974396   14580 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:26:00.977629   14580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:26:01.109165   14580 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:26:01.046549265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:26:01.109394   14580 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:26:01.109422   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:01.109436   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:01.109476   14580 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:01.131366   14580 out.go:177] * Starting control plane node default-k8s-different-port-20220601042455-2342 in cluster default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.152261   14580 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:26:01.174273   14580 out.go:177] * Pulling base image ...
	I0601 04:26:01.217266   14580 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:26:01.217322   14580 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:26:01.217364   14580 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:26:01.217394   14580 cache.go:57] Caching tarball of preloaded images
	I0601 04:26:01.217602   14580 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:26:01.217630   14580 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 04:26:01.218950   14580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/config.json ...
	I0601 04:26:01.292480   14580 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:26:01.292500   14580 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:26:01.292539   14580 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:26:01.292604   14580 start.go:352] acquiring machines lock for default-k8s-different-port-20220601042455-2342: {Name:mk23c69651775934f6906af797d469ba81c716b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:26:01.292726   14580 start.go:356] acquired machines lock for "default-k8s-different-port-20220601042455-2342" in 86.12µs
	I0601 04:26:01.292762   14580 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:26:01.292771   14580 fix.go:55] fixHost starting: 
	I0601 04:26:01.293035   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:26:01.364849   14580 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601042455-2342: state=Stopped err=<nil>
	W0601 04:26:01.364882   14580 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:26:01.386819   14580 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601042455-2342" ...
	I0601 04:26:01.408496   14580 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.820280   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:26:01.895506   14580 kic.go:416] container "default-k8s-different-port-20220601042455-2342" state is running.
	I0601 04:26:01.896465   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.978243   14580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/config.json ...
	I0601 04:26:01.978645   14580 machine.go:88] provisioning docker machine ...
	I0601 04:26:01.978666   14580 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601042455-2342"
	I0601 04:26:01.978721   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.057577   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.057774   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.057800   14580 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601042455-2342 && echo "default-k8s-different-port-20220601042455-2342" | sudo tee /etc/hostname
	I0601 04:26:02.186984   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601042455-2342
	
	I0601 04:26:02.187091   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.266807   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.267070   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.267086   14580 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601042455-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601042455-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601042455-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:26:02.391579   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:26:02.391598   14580 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:26:02.391622   14580 ubuntu.go:177] setting up certificates
	I0601 04:26:02.391631   14580 provision.go:83] configureAuth start
	I0601 04:26:02.391694   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.468272   14580 provision.go:138] copyHostCerts
	I0601 04:26:02.468364   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:26:02.468374   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:26:02.468462   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:26:02.468675   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:26:02.468685   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:26:02.468744   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:26:02.468926   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:26:02.468932   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:26:02.468992   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:26:02.469123   14580 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601042455-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601042455-2342]
	I0601 04:26:02.628033   14580 provision.go:172] copyRemoteCerts
	I0601 04:26:02.628108   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:26:02.628154   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.702757   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:02.788602   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 04:26:02.808975   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:26:02.827761   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 04:26:02.845233   14580 provision.go:86] duration metric: configureAuth took 453.583438ms
	I0601 04:26:02.845253   14580 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:26:02.845415   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:26:02.845498   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.918198   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.918337   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.918348   14580 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:26:03.037204   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:26:03.037224   14580 ubuntu.go:71] root file system type: overlay
	I0601 04:26:03.037352   14580 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:26:03.037443   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.111170   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:03.111313   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:03.111366   14580 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:26:03.240246   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:26:03.240328   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.313142   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:03.313309   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:03.313322   14580 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:26:03.436245   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:26:03.436261   14580 machine.go:91] provisioned docker machine in 1.457588188s
	I0601 04:26:03.436271   14580 start.go:306] post-start starting for "default-k8s-different-port-20220601042455-2342" (driver="docker")
	I0601 04:26:03.436277   14580 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:26:03.436331   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:26:03.436382   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.508767   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.596983   14580 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:26:03.600488   14580 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:26:03.600504   14580 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:26:03.600511   14580 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:26:03.600516   14580 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:26:03.600524   14580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:26:03.600622   14580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:26:03.600753   14580 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:26:03.600906   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:26:03.607953   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:26:03.625477   14580 start.go:309] post-start completed in 189.194706ms
	I0601 04:26:03.625551   14580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:26:03.625594   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.698770   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.782184   14580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:26:03.787963   14580 fix.go:57] fixHost completed within 2.49516055s
	I0601 04:26:03.787975   14580 start.go:81] releasing machines lock for "default-k8s-different-port-20220601042455-2342", held for 2.495210006s
	I0601 04:26:03.788048   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.861755   14580 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:26:03.861770   14580 ssh_runner.go:195] Run: systemctl --version
	I0601 04:26:03.861826   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.861844   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.941385   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.943609   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:04.159115   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:26:04.170967   14580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:26:04.180530   14580 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:26:04.180584   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:26:04.190099   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:26:04.203627   14580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:26:04.276680   14580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:26:04.346367   14580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:26:04.356550   14580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:26:04.429345   14580 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:26:04.438999   14580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:26:04.474032   14580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:26:04.561227   14580 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:26:04.561347   14580 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220601042455-2342 dig +short host.docker.internal
	I0601 04:26:04.700636   14580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:26:04.700876   14580 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:26:04.705714   14580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:26:04.715234   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:04.787381   14580 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:26:04.787444   14580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:26:04.820611   14580 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:26:04.820628   14580 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:26:04.820702   14580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:26:04.851433   14580 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:26:04.851452   14580 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:26:04.851529   14580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:26:04.924277   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:04.924289   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:04.924340   14580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:26:04.924355   14580 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601042455-2342 NodeName:default-k8s-different-port-20220601042455-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:26:04.924465   14580 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220601042455-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:26:04.924586   14580 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220601042455-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 04:26:04.924655   14580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:26:04.933113   14580 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:26:04.933164   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:26:04.939996   14580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0601 04:26:04.953811   14580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:26:04.966780   14580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0601 04:26:04.979301   14580 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:26:04.983155   14580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:26:04.993190   14580 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342 for IP: 192.168.49.2
	I0601 04:26:04.993351   14580 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:26:04.993405   14580 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:26:04.994010   14580 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.key
	I0601 04:26:04.994228   14580 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.key.dd3b5fb2
	I0601 04:26:04.994339   14580 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.key
	I0601 04:26:04.994792   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:26:04.994838   14580 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:26:04.994852   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:26:04.994897   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:26:04.994933   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:26:04.994966   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:26:04.995036   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:26:04.995574   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:26:05.012976   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 04:26:05.029562   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:26:05.046161   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 04:26:05.064012   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:26:05.081128   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:26:05.098110   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:26:05.116584   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:26:05.134436   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:26:05.152377   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:26:05.170599   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:26:05.187918   14580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:26:05.201055   14580 ssh_runner.go:195] Run: openssl version
	I0601 04:26:05.206392   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:26:05.214051   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.217767   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.217818   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.222918   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:26:05.230623   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:26:05.238341   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.242884   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.242932   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.248585   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:26:05.257319   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:26:05.266343   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.270643   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.270700   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.276959   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:26:05.288008   14580 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:05.288113   14580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:26:05.322070   14580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:26:05.329648   14580 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:26:05.329669   14580 kubeadm.go:626] restartCluster start
	I0601 04:26:05.329722   14580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:26:05.336276   14580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.336354   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:05.410961   14580 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601042455-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:26:05.411150   14580 kubeconfig.go:127] "default-k8s-different-port-20220601042455-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:26:05.411483   14580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:26:05.412859   14580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:26:05.420833   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.420896   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.429500   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.631665   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.631833   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.643018   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.831688   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.831898   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.843153   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.030469   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.030567   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.040665   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.231798   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.231890   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.243084   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.431676   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.431826   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.443194   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.631700   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.631935   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.642686   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.830012   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.830094   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.839242   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.029660   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.029824   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.040634   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.231729   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.231868   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.241891   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.431635   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.431790   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.442906   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.631739   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.631876   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.642010   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.831673   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.831875   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.842287   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.031773   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.031867   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.042244   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.230107   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.230278   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.240940   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.431826   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.431938   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.442260   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.442271   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.442320   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.450312   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.450323   14580 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:26:08.450330   14580 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:26:08.450388   14580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:26:08.480503   14580 docker.go:442] Stopping containers: [65d7be1a2882 048b1bdbb6c2 2c25ac3039ad 125e0a096cf4 ab5ecc73c373 2c18d790047c 929c1f424661 dabba0ff7c28 796713528a3d 545f113ce692 86e7f6f4c99d ee398f9c81ed a9ae0036438b f295a496a4ff 35bded318b85]
	I0601 04:26:08.480580   14580 ssh_runner.go:195] Run: docker stop 65d7be1a2882 048b1bdbb6c2 2c25ac3039ad 125e0a096cf4 ab5ecc73c373 2c18d790047c 929c1f424661 dabba0ff7c28 796713528a3d 545f113ce692 86e7f6f4c99d ee398f9c81ed a9ae0036438b f295a496a4ff 35bded318b85
	I0601 04:26:08.511573   14580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:26:08.521553   14580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:26:08.529129   14580 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  1 11:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:25 /etc/kubernetes/scheduler.conf
	
	I0601 04:26:08.529185   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 04:26:08.536247   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 04:26:08.543227   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 04:26:08.550190   14580 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.550240   14580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:26:08.556997   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 04:26:08.563897   14580 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.563944   14580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:26:08.570777   14580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:26:08.578228   14580 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:26:08.578236   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:08.622444   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.237802   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.364391   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.414114   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.459937   14580 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:26:09.459999   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:09.970965   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:10.470355   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:10.972317   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:11.018801   14580 api_server.go:71] duration metric: took 1.55884453s to wait for apiserver process to appear ...
	I0601 04:26:11.018822   14580 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:26:11.018837   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:13.577321   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:26:13.577342   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:26:14.079514   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:14.087324   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:26:14.087344   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:26:14.577516   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:14.584143   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:26:14.584160   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:26:15.078134   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:15.084601   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 200:
	ok
	I0601 04:26:15.090735   14580 api_server.go:140] control plane version: v1.23.6
	I0601 04:26:15.090746   14580 api_server.go:130] duration metric: took 4.071866633s to wait for apiserver health ...
	I0601 04:26:15.090751   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:15.090756   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:15.090765   14580 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:26:15.097741   14580 system_pods.go:59] 8 kube-system pods found
	I0601 04:26:15.097757   14580 system_pods.go:61] "coredns-64897985d-2cwbz" [f2ee505c-7abb-468c-b82f-0639d95d3f54] Running
	I0601 04:26:15.097764   14580 system_pods.go:61] "etcd-default-k8s-different-port-20220601042455-2342" [b259b886-9d8d-48c7-aa2a-65478e01fab5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:26:15.097771   14580 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [34bbd902-3352-4e4b-b54d-d825aa11c98a] Running
	I0601 04:26:15.097777   14580 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [efd80c45-ac3d-4e6f-81fd-e7bb51b9cffa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:26:15.097781   14580 system_pods.go:61] "kube-proxy-5psvf" [3d2253f1-8b8f-4db0-8081-ca96df760f01] Running
	I0601 04:26:15.097787   14580 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [18d03a0a-c279-4519-aff4-0601818b2b0f] Running
	I0601 04:26:15.097792   14580 system_pods.go:61] "metrics-server-b955d9d8-cb68n" [7969f4c9-b7b6-4268-bbeb-e853689361f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:26:15.097796   14580 system_pods.go:61] "storage-provisioner" [0da4c653-9101-4891-85e8-a014384c87d8] Running
	I0601 04:26:15.097800   14580 system_pods.go:74] duration metric: took 7.031251ms to wait for pod list to return data ...
	I0601 04:26:15.097806   14580 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:26:15.100523   14580 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:26:15.100537   14580 node_conditions.go:123] node cpu capacity is 6
	I0601 04:26:15.100549   14580 node_conditions.go:105] duration metric: took 2.73238ms to run NodePressure ...
	I0601 04:26:15.100560   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:15.225479   14580 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 04:26:15.230353   14580 kubeadm.go:777] kubelet initialised
	I0601 04:26:15.230363   14580 kubeadm.go:778] duration metric: took 4.871582ms waiting for restarted kubelet to initialise ...
	I0601 04:26:15.230371   14580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:26:15.235739   14580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-2cwbz" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:15.240544   14580 pod_ready.go:92] pod "coredns-64897985d-2cwbz" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:15.240553   14580 pod_ready.go:81] duration metric: took 4.800313ms waiting for pod "coredns-64897985d-2cwbz" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:15.240559   14580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:17.252022   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:19.252400   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:21.252507   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:23.752885   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:25.754820   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:27.752927   14580 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.752939   14580 pod_ready.go:81] duration metric: took 12.512215332s waiting for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.752945   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.758428   14580 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.758437   14580 pod_ready.go:81] duration metric: took 5.478741ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.758444   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.763037   14580 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.763046   14580 pod_ready.go:81] duration metric: took 4.596913ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.763053   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5psvf" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.767548   14580 pod_ready.go:92] pod "kube-proxy-5psvf" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.767557   14580 pod_ready.go:81] duration metric: took 4.499795ms waiting for pod "kube-proxy-5psvf" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.767564   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.771963   14580 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.771972   14580 pod_ready.go:81] duration metric: took 4.403205ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.771978   14580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:30.160100   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:32.659198   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:35.158334   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:37.159528   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:39.160149   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:41.659068   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:44.157795   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:46.658518   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:49.159038   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:51.658963   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:54.158069   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:56.158969   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:58.659942   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:00.660463   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:03.160184   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:05.659156   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:08.160717   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:10.660116   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:12.660625   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:15.160199   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:17.162541   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:19.658919   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:21.660968   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:23.661128   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:26.160934   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:28.659300   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:30.659502   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:32.660480   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:35.156691   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:37.157033   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:39.157790   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:41.659953   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:44.157489   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:46.158320   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:48.158876   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:50.159391   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:52.160809   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:54.657873   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:56.658844   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:58.660860   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:01.160686   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:03.658576   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:05.660625   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:08.158369   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:10.159184   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:12.657760   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:14.659544   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:17.158299   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:19.159216   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:21.159644   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:23.659877   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:26.159865   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:28.161266   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:30.658249   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:32.659490   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:35.158008   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:37.160518   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:39.161152   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:41.660719   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:44.157806   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:46.159192   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:48.160558   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:50.661861   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:53.158863   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:55.159591   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:57.160005   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:59.660242   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:02.159089   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:04.163195   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:06.658567   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:08.661737   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:11.160153   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:13.659262   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:16.160500   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:18.659465   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:20.660803   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:22.661147   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:25.160932   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:27.659142   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:29.661942   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:32.158831   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:34.160363   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:36.162066   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:38.660230   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:40.660953   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:43.161689   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:45.660306   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:47.662797   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:50.161558   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:52.661866   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:55.162273   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:57.162318   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:59.663120   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:02.160469   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:04.161108   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:06.161957   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:08.662446   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:11.159732   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:13.161296   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:15.162016   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:17.663004   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:20.160275   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:22.162820   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:24.659704   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:26.659992   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:28.154244   14580 pod_ready.go:81] duration metric: took 4m0.379163703s waiting for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" ...
	E0601 04:30:28.154270   14580 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 04:30:28.154377   14580 pod_ready.go:38] duration metric: took 4m12.920745187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:30:28.154419   14580 kubeadm.go:630] restartCluster took 4m22.821363871s
	W0601 04:30:28.154538   14580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 04:30:28.154568   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:31:06.489649   14580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.334570745s)
	I0601 04:31:06.489708   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:06.500019   14580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:31:06.508704   14580 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:31:06.508749   14580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:31:06.516354   14580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:31:06.516381   14580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:31:07.022688   14580 out.go:204]   - Generating certificates and keys ...
	I0601 04:31:07.547628   14580 out.go:204]   - Booting up control plane ...
	I0601 04:31:14.098649   14580 out.go:204]   - Configuring RBAC rules ...
	I0601 04:31:14.472898   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:31:14.472939   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:31:14.472970   14580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:31:14.473040   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601042455-2342 minikube.k8s.io/updated_at=2022_06_01T04_31_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.473054   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.609357   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.629440   14580 ops.go:34] apiserver oom_adj: -16
	I0601 04:31:15.302635   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:15.802069   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:16.301733   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:16.802238   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:17.302310   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:17.801852   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:18.301850   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:18.801983   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:19.301780   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:19.802161   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:20.301791   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:20.801992   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:21.302891   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:21.803055   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:22.301886   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:22.802271   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:23.303324   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:23.801846   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:24.302533   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:24.802000   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:25.302196   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:25.801882   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:26.302056   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:26.801937   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:27.301872   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:27.356443   14580 kubeadm.go:1045] duration metric: took 12.883292321s to wait for elevateKubeSystemPrivileges.
	I0601 04:31:27.356475   14580 kubeadm.go:397] StartCluster complete in 5m22.064331829s
	I0601 04:31:27.356499   14580 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:31:27.356584   14580 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:31:27.357174   14580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:31:27.873007   14580 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601042455-2342" rescaled to 1
	I0601 04:31:27.873045   14580 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:31:27.873075   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:31:27.916883   14580 out.go:177] * Verifying Kubernetes components...
	I0601 04:31:27.873103   14580 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:31:27.873265   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:31:27.990179   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:27.990194   14580 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990191   14580 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990230   14580 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990256   14580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990270   14580 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990292   14580 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990297   14580 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601042455-2342"
	W0601 04:31:27.990313   14580 addons.go:165] addon storage-provisioner should already be in state true
	W0601 04:31:27.990319   14580 addons.go:165] addon dashboard should already be in state true
	I0601 04:31:27.990289   14580 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601042455-2342"
	W0601 04:31:27.990347   14580 addons.go:165] addon metrics-server should already be in state true
	I0601 04:31:27.990402   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.990406   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.990546   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.991110   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991161   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991193   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991205   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:28.005334   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:31:28.019407   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.119407   14580 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:28.160009   14580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0601 04:31:28.160022   14580 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:31:28.139235   14580 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:31:28.160061   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:28.181420   14580 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:31:28.182270   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:28.222903   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:31:28.202022   14580 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 04:31:28.213807   14580 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601042455-2342" to be "Ready" ...
	I0601 04:31:28.222906   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:31:28.222991   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.244155   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:31:28.244317   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.265065   14580 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:31:28.248658   14580 node_ready.go:49] node "default-k8s-different-port-20220601042455-2342" has status "Ready":"True"
	I0601 04:31:28.286018   14580 node_ready.go:38] duration metric: took 41.900602ms waiting for node "default-k8s-different-port-20220601042455-2342" to be "Ready" ...
	I0601 04:31:28.286061   14580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:31:28.286111   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:31:28.286143   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:31:28.286305   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.301339   14580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-8p4v4" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:28.323212   14580 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:31:28.323230   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:31:28.323329   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.353482   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.378157   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.401554   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.425239   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.501564   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:31:28.598323   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:31:28.598337   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:31:28.606270   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:31:28.608473   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:31:28.608492   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:31:28.690725   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:31:28.690745   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:31:28.702169   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:31:28.702196   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:31:28.793453   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:31:28.793479   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:31:28.799106   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:31:28.799130   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:31:28.885461   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:31:28.897510   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:31:28.897524   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:31:28.918188   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:31:28.918205   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:31:29.002904   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:31:29.002919   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:31:29.117220   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:31:29.117235   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:31:29.201285   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:31:29.201302   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:31:29.287703   14580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.282311049s)
	I0601 04:31:29.287729   14580 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 04:31:29.291304   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:31:29.291322   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:31:29.392031   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:31:29.727831   14580 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:29.886057   14580 pod_ready.go:92] pod "coredns-64897985d-8p4v4" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:29.886074   14580 pod_ready.go:81] duration metric: took 1.584692102s waiting for pod "coredns-64897985d-8p4v4" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:29.886087   14580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-cb9n8" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:30.725108   14580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.333022241s)
	I0601 04:31:30.806402   14580 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 04:31:30.843522   14580 addons.go:417] enableAddons completed in 2.970381543s
	I0601 04:31:31.905757   14580 pod_ready.go:102] pod "coredns-64897985d-cb9n8" in "kube-system" namespace has status "Ready":"False"
	I0601 04:31:32.905434   14580 pod_ready.go:92] pod "coredns-64897985d-cb9n8" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.905450   14580 pod_ready.go:81] duration metric: took 3.019316909s waiting for pod "coredns-64897985d-cb9n8" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.905457   14580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.914545   14580 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.914568   14580 pod_ready.go:81] duration metric: took 9.084073ms waiting for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.914583   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.926766   14580 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.926777   14580 pod_ready.go:81] duration metric: took 12.185589ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.926785   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.936236   14580 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.936249   14580 pod_ready.go:81] duration metric: took 9.458235ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.936261   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p7tsj" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.982358   14580 pod_ready.go:92] pod "kube-proxy-p7tsj" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.982376   14580 pod_ready.go:81] duration metric: took 46.107821ms waiting for pod "kube-proxy-p7tsj" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.982388   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:33.300851   14580 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:33.300861   14580 pod_ready.go:81] duration metric: took 318.462177ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:33.300867   14580 pod_ready.go:38] duration metric: took 5.014691974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:31:33.300883   14580 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:31:33.300930   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:31:33.312670   14580 api_server.go:71] duration metric: took 5.439538065s to wait for apiserver process to appear ...
	I0601 04:31:33.312684   14580 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:31:33.312690   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:31:33.318481   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 200:
	ok
	I0601 04:31:33.319652   14580 api_server.go:140] control plane version: v1.23.6
	I0601 04:31:33.319662   14580 api_server.go:130] duration metric: took 6.974325ms to wait for apiserver health ...
	I0601 04:31:33.319668   14580 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:31:33.503644   14580 system_pods.go:59] 9 kube-system pods found
	I0601 04:31:33.503658   14580 system_pods.go:61] "coredns-64897985d-8p4v4" [ae0cb737-4e73-40a0-b7ca-c5fb35908ad9] Running
	I0601 04:31:33.503664   14580 system_pods.go:61] "coredns-64897985d-cb9n8" [0b71bc2a-d0ac-4d4d-9420-1422f088b267] Running
	I0601 04:31:33.503672   14580 system_pods.go:61] "etcd-default-k8s-different-port-20220601042455-2342" [d64e3142-a5a3-438a-b1dd-f8fda41cf500] Running
	I0601 04:31:33.503684   14580 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [e7ebee32-6122-4fd0-8e7a-26d16cf09fd5] Running
	I0601 04:31:33.503691   14580 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [736247dc-e330-4d49-a9b4-38e9f4bf2f55] Running
	I0601 04:31:33.503697   14580 system_pods.go:61] "kube-proxy-p7tsj" [4a00e2b2-3357-4d45-812e-b96583883072] Running
	I0601 04:31:33.503708   14580 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [547e2d90-4aa4-4ffa-8227-7a87069bc624] Running
	I0601 04:31:33.503718   14580 system_pods.go:61] "metrics-server-b955d9d8-vqpwl" [53aca426-4c43-4abd-bbb9-ca59d11ca961] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:31:33.503726   14580 system_pods.go:61] "storage-provisioner" [eb46d9b1-266a-406d-bfa9-384a28696367] Running
	I0601 04:31:33.503737   14580 system_pods.go:74] duration metric: took 184.060787ms to wait for pod list to return data ...
	I0601 04:31:33.503746   14580 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:31:33.700368   14580 default_sa.go:45] found service account: "default"
	I0601 04:31:33.700381   14580 default_sa.go:55] duration metric: took 196.626716ms for default service account to be created ...
	I0601 04:31:33.700386   14580 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:31:33.904017   14580 system_pods.go:86] 9 kube-system pods found
	I0601 04:31:33.904032   14580 system_pods.go:89] "coredns-64897985d-8p4v4" [ae0cb737-4e73-40a0-b7ca-c5fb35908ad9] Running
	I0601 04:31:33.904036   14580 system_pods.go:89] "coredns-64897985d-cb9n8" [0b71bc2a-d0ac-4d4d-9420-1422f088b267] Running
	I0601 04:31:33.904040   14580 system_pods.go:89] "etcd-default-k8s-different-port-20220601042455-2342" [d64e3142-a5a3-438a-b1dd-f8fda41cf500] Running
	I0601 04:31:33.904050   14580 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [e7ebee32-6122-4fd0-8e7a-26d16cf09fd5] Running
	I0601 04:31:33.904056   14580 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [736247dc-e330-4d49-a9b4-38e9f4bf2f55] Running
	I0601 04:31:33.904060   14580 system_pods.go:89] "kube-proxy-p7tsj" [4a00e2b2-3357-4d45-812e-b96583883072] Running
	I0601 04:31:33.904064   14580 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [547e2d90-4aa4-4ffa-8227-7a87069bc624] Running
	I0601 04:31:33.904069   14580 system_pods.go:89] "metrics-server-b955d9d8-vqpwl" [53aca426-4c43-4abd-bbb9-ca59d11ca961] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:31:33.904073   14580 system_pods.go:89] "storage-provisioner" [eb46d9b1-266a-406d-bfa9-384a28696367] Running
	I0601 04:31:33.904079   14580 system_pods.go:126] duration metric: took 203.685319ms to wait for k8s-apps to be running ...
	I0601 04:31:33.904101   14580 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:31:33.904156   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:33.916392   14580 system_svc.go:56] duration metric: took 12.281443ms WaitForService to wait for kubelet.
	I0601 04:31:33.916408   14580 kubeadm.go:572] duration metric: took 6.043269745s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:31:33.916426   14580 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:31:34.101016   14580 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:31:34.101029   14580 node_conditions.go:123] node cpu capacity is 6
	I0601 04:31:34.101041   14580 node_conditions.go:105] duration metric: took 184.609149ms to run NodePressure ...
	I0601 04:31:34.101051   14580 start.go:213] waiting for startup goroutines ...
	I0601 04:31:34.134136   14580 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:31:34.156421   14580 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220601042455-2342" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:26:01 UTC, end at Wed 2022-06-01 11:32:31 UTC. --
	Jun 01 11:30:44 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:30:44.674847998Z" level=info msg="ignoring event" container=7be96b1a71ca0b1fdaf439b0c930009d83291f45c70007f146a065b57fb040ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:30:44 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:30:44.839047843Z" level=info msg="ignoring event" container=a25919e289513d34b3d48349a0339335d178fdd613a17880ecd402f4b71bf545 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:30:54 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:30:54.912696249Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=e970e13285ef168f51ccd7fdce04102d77a9d0bf72d5b67256846621b8cd3c72
	Jun 01 11:30:54 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:30:54.970571486Z" level=info msg="ignoring event" container=e970e13285ef168f51ccd7fdce04102d77a9d0bf72d5b67256846621b8cd3c72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:30:55 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:30:55.077328142Z" level=info msg="ignoring event" container=bc0a031e9a8b873924db170fc5504e7226c67e6c498a6b1b7ebf6baa8ce7ed5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.167958835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=efbfded29c7c85527125ddc6fa14baf8b1b350b8587296ebd811d04fcb467eec
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.196232362Z" level=info msg="ignoring event" container=efbfded29c7c85527125ddc6fa14baf8b1b350b8587296ebd811d04fcb467eec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.292553486Z" level=info msg="ignoring event" container=d1bcb871362abcabdac28a513d9e259519127ce00a6976cc6dd36416b7e923e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.396669666Z" level=info msg="ignoring event" container=b505e8fe19d4020fb99a869230aae36a07b8e7e85e73758bc3268d673e6e22c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.501940633Z" level=info msg="ignoring event" container=6c6ef83e3aeb282d956159a08b9759bbf20155229f4ecfb127a53757be2dc427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.647855928Z" level=info msg="ignoring event" container=d085de832bf29711716070afc2653e953dcefeecc6aa4206d2412f845c6a4387 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:30 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:30.879039995Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:30 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:30.879086766Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:30 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:30.881837621Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:32 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:32.024185573Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:31:32 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:32.277565690Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:31:35 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:35.780566009Z" level=info msg="ignoring event" container=6a3a318fa62b3282dc86d3c8bda6f96dbef15638b15b1f378f3b75d30a325033 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:35 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:35.809691517Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 11:31:36 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:36.060887471Z" level=info msg="ignoring event" container=60dbc27c19faeee48af2d41bc8eca6fcecd819e5451bab70c535dcd0c115f59d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:39 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:39.625762405Z" level=info msg="ignoring event" container=f6ac4e004dce82992799c84e45437423e817f05464e916e474ae2e4c949a07e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:39 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:39.836572061Z" level=info msg="ignoring event" container=496609584bea71bdc46f4e36bf82bf974e4c702cf8b9983ff4958d3ca4289de2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:46 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:46.721740419Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:46 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:46.721838876Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:46 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:46.723296922Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:51 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:51.851221716Z" level=info msg="ignoring event" container=9d4f92bb5f43f9f1d14b6ac3f8eef771d3fec5755606ef9ad1b3148da890392a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	9d4f92bb5f43f       a90209bb39e3d                                                                                    40 seconds ago       Exited              dashboard-metrics-scraper   2                   068b18d47931a
	d7e8a986ad3fe       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   51 seconds ago       Running             kubernetes-dashboard        0                   ced4b04531433
	09117d0ae0022       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   747c706548598
	f0d66b891e748       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   6ace882d88d40
	297e6de0e635a       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   5b12d131f066f
	96a446a85e3e4       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   bb23663b4145d
	ae8b657759ba5       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   eb0b0ede8c705
	195f862cba9a8       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   bd709f91f83c7
	980bfd0c53394       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   9e591edac9643
	
	* 
	* ==> coredns [f0d66b891e74] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601042455-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601042455-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=default-k8s-different-port-20220601042455-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_31_14_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:31:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601042455-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:32:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:31:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:31:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:31:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:32:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220601042455-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                c5fbb63d-9472-4980-961d-f3d3881cf336
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-cb9n8                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     65s
	  kube-system                 etcd-default-k8s-different-port-20220601042455-2342                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601042455-2342             250m (4%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601042455-2342    200m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-p7tsj                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601042455-2342             100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 metrics-server-b955d9d8-vqpwl                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         63s
	  kube-system                 storage-provisioner                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-4rh9k                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-vsgbf                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 64s   kube-proxy  
	  Normal  NodeHasSufficientMemory  78s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeReady                67s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeReady
	  Normal  Starting                 3s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [980bfd0c5339] <==
	* {"level":"info","ts":"2022-06-01T11:31:09.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:31:09.039Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220601042455-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:32:32 up  1:13,  0 users,  load average: 0.59, 0.52, 0.70
	Linux default-k8s-different-port-20220601042455-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [195f862cba9a] <==
	* I0601 11:31:12.948082       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:31:12.971487       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:31:13.045014       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:31:13.048601       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:31:13.049226       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:31:13.052401       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:31:13.825790       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:31:14.338549       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:31:14.344547       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:31:14.354424       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:31:14.542398       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:31:27.411587       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:31:27.560531       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:31:28.051465       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:31:29.717904       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.106.246.152]
	W0601 11:31:30.422805       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:31:30.422894       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:31:30.422900       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:31:30.634323       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.155.50]
	I0601 11:31:30.715175       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.108.27.40]
	W0601 11:32:30.381671       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:32:30.381730       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:32:30.381738       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [96a446a85e3e] <==
	* I0601 11:31:29.425333       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 11:31:29.495532       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 11:31:29.512593       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-vqpwl"
	I0601 11:31:30.441455       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 11:31:30.448620       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.492224       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:31:30.496176       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.496340       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.538112       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0601 11:31:30.539699       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.539784       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.547810       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.553977       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:31:30.559633       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.559784       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.565625       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:31:30.565625       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.565624       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.565748       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.595694       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.595745       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.611290       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-4rh9k"
	I0601 11:31:30.699165       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-vsgbf"
	E0601 11:32:28.878861       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:32:28.888431       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [297e6de0e635] <==
	* I0601 11:31:28.003544       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:31:28.003686       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:31:28.003732       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:31:28.039405       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:31:28.039511       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:31:28.039526       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:31:28.039545       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:31:28.040676       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:31:28.041626       1 config.go:317] "Starting service config controller"
	I0601 11:31:28.041642       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:31:28.041680       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:31:28.041685       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:31:28.141921       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:31:28.141933       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [ae8b657759ba] <==
	* W0601 11:31:11.724943       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:31:11.725033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:31:11.725410       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:11.725440       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:11.725653       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:31:11.725702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:31:11.726059       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:31:11.726104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:31:11.726306       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:31:11.726337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:31:11.726516       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:31:11.726546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:31:11.726705       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:11.726737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:12.588733       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:12.588793       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:12.627410       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:31:12.627508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:31:12.726171       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:12.726214       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:12.777877       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:31:12.777894       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:31:12.882386       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:31:12.882426       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:31:15.993625       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:26:01 UTC, end at Wed 2022-06-01 11:32:33 UTC. --
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453745    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmd6n\" (UniqueName: \"kubernetes.io/projected/5e30b028-d8e4-4995-a03e-f3039f2e629a-kube-api-access-vmd6n\") pod \"kubernetes-dashboard-8469778f77-vsgbf\" (UID: \"5e30b028-d8e4-4995-a03e-f3039f2e629a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-vsgbf"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453890    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a00e2b2-3357-4d45-812e-b96583883072-kube-proxy\") pod \"kube-proxy-p7tsj\" (UID: \"4a00e2b2-3357-4d45-812e-b96583883072\") " pod="kube-system/kube-proxy-p7tsj"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453920    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/53aca426-4c43-4abd-bbb9-ca59d11ca961-tmp-dir\") pod \"metrics-server-b955d9d8-vqpwl\" (UID: \"53aca426-4c43-4abd-bbb9-ca59d11ca961\") " pod="kube-system/metrics-server-b955d9d8-vqpwl"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453938    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eb46d9b1-266a-406d-bfa9-384a28696367-tmp\") pod \"storage-provisioner\" (UID: \"eb46d9b1-266a-406d-bfa9-384a28696367\") " pod="kube-system/storage-provisioner"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453956    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed136c46-f2f0-412c-9b65-f56260bc72b0-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-4rh9k\" (UID: \"ed136c46-f2f0-412c-9b65-f56260bc72b0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-4rh9k"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453975    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b71bc2a-d0ac-4d4d-9420-1422f088b267-config-volume\") pod \"coredns-64897985d-cb9n8\" (UID: \"0b71bc2a-d0ac-4d4d-9420-1422f088b267\") " pod="kube-system/coredns-64897985d-cb9n8"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453992    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfsp\" (UniqueName: \"kubernetes.io/projected/0b71bc2a-d0ac-4d4d-9420-1422f088b267-kube-api-access-jpfsp\") pod \"coredns-64897985d-cb9n8\" (UID: \"0b71bc2a-d0ac-4d4d-9420-1422f088b267\") " pod="kube-system/coredns-64897985d-cb9n8"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454008    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a00e2b2-3357-4d45-812e-b96583883072-lib-modules\") pod \"kube-proxy-p7tsj\" (UID: \"4a00e2b2-3357-4d45-812e-b96583883072\") " pod="kube-system/kube-proxy-p7tsj"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454029    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf58f\" (UniqueName: \"kubernetes.io/projected/eb46d9b1-266a-406d-bfa9-384a28696367-kube-api-access-mf58f\") pod \"storage-provisioner\" (UID: \"eb46d9b1-266a-406d-bfa9-384a28696367\") " pod="kube-system/storage-provisioner"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454090    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a00e2b2-3357-4d45-812e-b96583883072-xtables-lock\") pod \"kube-proxy-p7tsj\" (UID: \"4a00e2b2-3357-4d45-812e-b96583883072\") " pod="kube-system/kube-proxy-p7tsj"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454225    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvr7t\" (UniqueName: \"kubernetes.io/projected/ed136c46-f2f0-412c-9b65-f56260bc72b0-kube-api-access-rvr7t\") pod \"dashboard-metrics-scraper-56974995fc-4rh9k\" (UID: \"ed136c46-f2f0-412c-9b65-f56260bc72b0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-4rh9k"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454419    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5e30b028-d8e4-4995-a03e-f3039f2e629a-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-vsgbf\" (UID: \"5e30b028-d8e4-4995-a03e-f3039f2e629a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-vsgbf"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454565    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkcc9\" (UniqueName: \"kubernetes.io/projected/4a00e2b2-3357-4d45-812e-b96583883072-kube-api-access-rkcc9\") pod \"kube-proxy-p7tsj\" (UID: \"4a00e2b2-3357-4d45-812e-b96583883072\") " pod="kube-system/kube-proxy-p7tsj"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454655    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjr68\" (UniqueName: \"kubernetes.io/projected/53aca426-4c43-4abd-bbb9-ca59d11ca961-kube-api-access-cjr68\") pod \"metrics-server-b955d9d8-vqpwl\" (UID: \"53aca426-4c43-4abd-bbb9-ca59d11ca961\") " pod="kube-system/metrics-server-b955d9d8-vqpwl"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454758    7278 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:32:31 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:31.611768    7278 request.go:665] Waited for 1.148365663s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 01 11:32:31 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:31.616646    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:31 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:31.872628    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:32 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:32.055336    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:32 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:32.290787    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:32 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:32.820407    7278 scope.go:110] "RemoveContainer" containerID="9d4f92bb5f43f9f1d14b6ac3f8eef771d3fec5755606ef9ad1b3148da890392a"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183153    7278 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183203    7278 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183352    7278 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cjr68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-vqpwl_kube-system(53aca426-4c43-4abd-bbb9-ca59d11ca961): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183416    7278 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-vqpwl" podUID=53aca426-4c43-4abd-bbb9-ca59d11ca961
	
	* 
	* ==> kubernetes-dashboard [d7e8a986ad3f] <==
	* 2022/06/01 11:31:40 Using namespace: kubernetes-dashboard
	2022/06/01 11:31:40 Using in-cluster config to connect to apiserver
	2022/06/01 11:31:40 Using secret token for csrf signing
	2022/06/01 11:31:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 11:31:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 11:31:40 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 11:31:40 Generating JWE encryption key
	2022/06/01 11:31:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 11:31:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 11:31:40 Initializing JWE encryption key from synchronized object
	2022/06/01 11:31:40 Creating in-cluster Sidecar client
	2022/06/01 11:31:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:31:40 Serving insecurely on HTTP port: 9090
	2022/06/01 11:32:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:31:40 Starting overwatch
	
	* 
	* ==> storage-provisioner [09117d0ae002] <==
	* I0601 11:31:30.971940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:31:30.978879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:31:30.978926       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:31:30.984255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:31:30.984318       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"deefa71f-7253-48a1-9c18-c3eb9316f0b9", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220601042455-2342_bbdcfe2c-8666-4c23-b99d-94ac01b710f8 became leader
	I0601 11:31:30.984451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601042455-2342_bbdcfe2c-8666-4c23-b99d-94ac01b710f8!
	I0601 11:31:31.085434       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601042455-2342_bbdcfe2c-8666-4c23-b99d-94ac01b710f8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-vqpwl
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 describe pod metrics-server-b955d9d8-vqpwl
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601042455-2342 describe pod metrics-server-b955d9d8-vqpwl: exit status 1 (275.191113ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-vqpwl" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601042455-2342 describe pod metrics-server-b955d9d8-vqpwl: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601042455-2342
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601042455-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5",
	        "Created": "2022-06-01T11:25:02.103923004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:26:01.823416021Z",
	            "FinishedAt": "2022-06-01T11:25:59.819385515Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/hosts",
	        "LogPath": "/var/lib/docker/containers/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5/40099052bd124fb15fc74243debff5ad9de0413925ae8cccd043e504ccfc09b5-json.log",
	        "Name": "/default-k8s-different-port-20220601042455-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601042455-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601042455-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b349ab858f245919ce51dffccff79d5e5c946cdb6c4a63e21c99311da4d8c9ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601042455-2342",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601042455-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601042455-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601042455-2342",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601042455-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4be6775a93b8621d58014a5ecfab0be854567a196359d6edc6c06b5055706665",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54220"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54222"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54223"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4be6775a93b8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601042455-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "40099052bd12",
	                        "default-k8s-different-port-20220601042455-2342"
	                    ],
	                    "NetworkID": "5f3c12cd19ff58d51e01757fec6b82d20b68f4cb21bd8ce16a3e44d0d6a0e4a2",
	                    "EndpointID": "5a96a10cec056260adf74fde055f6a475a03205be170a18adc34b9e8f7adce93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
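For reference, the host port mappings in the inspect output above (22/tcp -> 54219, 8444/tcp -> 54223) are exactly what the provisioner queries later in this log with a Go template. A minimal sketch of that lookup, assuming only that the docker CLI is on PATH (illustrative, not minikube's actual cli_runner code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker published for the container's 22/tcp,
// using the same --format template that appears in the cli_runner log lines below.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("default-k8s-different-port-20220601042455-2342")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + port) // 54219 in the inspect output above
}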
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220601042455-2342 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220601042455-2342 logs -n 25: (2.71631992s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                                |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601040915-2342                | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:16 PDT |
	|         | embed-certs-20220601040915-2342                   |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:16 PDT | 01 Jun 22 04:17 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:18 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342               | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:22 PDT | 01 Jun 22 04:22 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:18 PDT | 01 Jun 22 04:23 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --memory=2200                                     |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                   |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                    | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                    | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                    |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342               | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601042455-2342    | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:26:00
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:26:00.480154   14580 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:26:00.480367   14580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:26:00.480372   14580 out.go:309] Setting ErrFile to fd 2...
	I0601 04:26:00.480376   14580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:26:00.480472   14580 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:26:00.480724   14580 out.go:303] Setting JSON to false
	I0601 04:26:00.495972   14580 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5130,"bootTime":1654077630,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:26:00.496148   14580 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:26:00.518702   14580 out.go:177] * [default-k8s-different-port-20220601042455-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:26:00.540230   14580 notify.go:193] Checking for updates...
	I0601 04:26:00.562166   14580 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:26:00.584188   14580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:26:00.606131   14580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:26:00.628160   14580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:26:00.649242   14580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:26:00.671625   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:26:00.672286   14580 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:26:00.744341   14580 docker.go:137] docker version: linux-20.10.14
	I0601 04:26:00.744470   14580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:26:00.876577   14580 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:26:00.816467951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:26:00.952426   14580 out.go:177] * Using the docker driver based on existing profile
	I0601 04:26:00.974167   14580 start.go:284] selected driver: docker
	I0601 04:26:00.974245   14580 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-
20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:00.974396   14580 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:26:00.977629   14580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:26:01.109165   14580 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:26:01.046549265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:26:01.109394   14580 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 04:26:01.109422   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:01.109436   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:01.109476   14580 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:01.131366   14580 out.go:177] * Starting control plane node default-k8s-different-port-20220601042455-2342 in cluster default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.152261   14580 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:26:01.174273   14580 out.go:177] * Pulling base image ...
	I0601 04:26:01.217266   14580 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:26:01.217322   14580 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:26:01.217364   14580 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:26:01.217394   14580 cache.go:57] Caching tarball of preloaded images
	I0601 04:26:01.217602   14580 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:26:01.217630   14580 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 04:26:01.218950   14580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/config.json ...
	I0601 04:26:01.292480   14580 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:26:01.292500   14580 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:26:01.292539   14580 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:26:01.292604   14580 start.go:352] acquiring machines lock for default-k8s-different-port-20220601042455-2342: {Name:mk23c69651775934f6906af797d469ba81c716b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:26:01.292726   14580 start.go:356] acquired machines lock for "default-k8s-different-port-20220601042455-2342" in 86.12µs
	I0601 04:26:01.292762   14580 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:26:01.292771   14580 fix.go:55] fixHost starting: 
	I0601 04:26:01.293035   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:26:01.364849   14580 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601042455-2342: state=Stopped err=<nil>
	W0601 04:26:01.364882   14580 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:26:01.386819   14580 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601042455-2342" ...
	I0601 04:26:01.408496   14580 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.820280   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:26:01.895506   14580 kic.go:416] container "default-k8s-different-port-20220601042455-2342" state is running.
	I0601 04:26:01.896465   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:01.978243   14580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/config.json ...
	I0601 04:26:01.978645   14580 machine.go:88] provisioning docker machine ...
	I0601 04:26:01.978666   14580 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601042455-2342"
	I0601 04:26:01.978721   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.057577   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.057774   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.057800   14580 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601042455-2342 && echo "default-k8s-different-port-20220601042455-2342" | sudo tee /etc/hostname
	I0601 04:26:02.186984   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601042455-2342
	
	I0601 04:26:02.187091   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.266807   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.267070   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.267086   14580 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601042455-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601042455-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601042455-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:26:02.391579   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:26:02.391598   14580 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:26:02.391622   14580 ubuntu.go:177] setting up certificates
	I0601 04:26:02.391631   14580 provision.go:83] configureAuth start
	I0601 04:26:02.391694   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.468272   14580 provision.go:138] copyHostCerts
	I0601 04:26:02.468364   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:26:02.468374   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:26:02.468462   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:26:02.468675   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:26:02.468685   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:26:02.468744   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:26:02.468926   14580 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:26:02.468932   14580 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:26:02.468992   14580 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:26:02.469123   14580 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601042455-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601042455-2342]
	I0601 04:26:02.628033   14580 provision.go:172] copyRemoteCerts
	I0601 04:26:02.628108   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:26:02.628154   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.702757   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:02.788602   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 04:26:02.808975   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:26:02.827761   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0601 04:26:02.845233   14580 provision.go:86] duration metric: configureAuth took 453.583438ms
	I0601 04:26:02.845253   14580 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:26:02.845415   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:26:02.845498   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:02.918198   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:02.918337   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:02.918348   14580 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:26:03.037204   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:26:03.037224   14580 ubuntu.go:71] root file system type: overlay
	I0601 04:26:03.037352   14580 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:26:03.037443   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.111170   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:03.111313   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:03.111366   14580 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:26:03.240246   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:26:03.240328   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.313142   14580 main.go:134] libmachine: Using SSH client type: native
	I0601 04:26:03.313309   14580 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 54219 <nil> <nil>}
	I0601 04:26:03.313322   14580 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:26:03.436245   14580 main.go:134] libmachine: SSH cmd err, output: <nil>: 
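The command above installs the rendered docker.service unit only when it differs from what is already on disk, restarting docker in that case. A minimal sketch of the same write-only-if-changed idea in Go (illustrative, not minikube's provisioner; the caller would still run `systemctl daemon-reload && systemctl restart docker` when a change is reported):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged writes content to path only when it differs from what is
// already there, and reports whether anything changed.
func writeIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical: no write, no docker restart needed
	}
	if err := os.WriteFile(path, content, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd ...\n") // placeholder content
	changed, err := writeIfChanged("/tmp/docker.service", unit)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Only when changed is true would the provisioner go on to reload and restart docker.
	fmt.Println("changed:", changed)
}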
	I0601 04:26:03.436261   14580 machine.go:91] provisioned docker machine in 1.457588188s
	I0601 04:26:03.436271   14580 start.go:306] post-start starting for "default-k8s-different-port-20220601042455-2342" (driver="docker")
	I0601 04:26:03.436277   14580 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:26:03.436331   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:26:03.436382   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.508767   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.596983   14580 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:26:03.600488   14580 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:26:03.600504   14580 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:26:03.600511   14580 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:26:03.600516   14580 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:26:03.600524   14580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:26:03.600622   14580 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:26:03.600753   14580 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:26:03.600906   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:26:03.607953   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:26:03.625477   14580 start.go:309] post-start completed in 189.194706ms
	I0601 04:26:03.625551   14580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:26:03.625594   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.698770   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.782184   14580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:26:03.787963   14580 fix.go:57] fixHost completed within 2.49516055s
	I0601 04:26:03.787975   14580 start.go:81] releasing machines lock for "default-k8s-different-port-20220601042455-2342", held for 2.495210006s
	I0601 04:26:03.788048   14580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.861755   14580 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:26:03.861770   14580 ssh_runner.go:195] Run: systemctl --version
	I0601 04:26:03.861826   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.861844   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:03.941385   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:03.943609   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:26:04.159115   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:26:04.170967   14580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:26:04.180530   14580 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:26:04.180584   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:26:04.190099   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:26:04.203627   14580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:26:04.276680   14580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:26:04.346367   14580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:26:04.356550   14580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:26:04.429345   14580 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:26:04.438999   14580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:26:04.474032   14580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:26:04.561227   14580 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:26:04.561347   14580 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220601042455-2342 dig +short host.docker.internal
	I0601 04:26:04.700636   14580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:26:04.700876   14580 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:26:04.705714   14580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
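The /etc/hosts update above is idempotent: any existing host.minikube.internal entry is filtered out with grep -v before the fresh 192.168.65.2 mapping is appended, and the result is copied back into place with sudo cp because a plain output redirection would run with the unprivileged user's permissions.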
	I0601 04:26:04.715234   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:04.787381   14580 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:26:04.787444   14580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:26:04.820611   14580 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:26:04.820628   14580 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:26:04.820702   14580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:26:04.851433   14580 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 04:26:04.851452   14580 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:26:04.851529   14580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:26:04.924277   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:04.924289   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:04.924340   14580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 04:26:04.924355   14580 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601042455-2342 NodeName:default-k8s-different-port-20220601042455-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:26:04.924465   14580 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220601042455-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
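The rendered kubeadm config above drives the rest of the restart. Two settings matter for this profile: bindPort / controlPlaneEndpoint use 8444 rather than the default 8443 (this is the "different port" the test exercises), and cgroupDriver is systemd, derived from the `docker info --format {{.CgroupDriver}}` query a few lines earlier. The `"0%!"(MISSING)` strings in the evictionHard block are the same log-formatter quirk noted above; the intended values are "0%". As an illustration only, a minimal Go sketch (stdlib, assuming the file lands at /var/tmp/minikube/kubeadm.yaml.new as the scp line further below shows) that sanity-checks those two settings:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Path taken from the scp step later in this log; adjust as needed.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cfg := string(data)
        // The two settings this particular profile depends on.
        for _, want := range []string{
            "controlPlaneEndpoint: control-plane.minikube.internal:8444",
            "cgroupDriver: systemd",
        } {
            if !strings.Contains(cfg, want) {
                fmt.Printf("missing expected setting: %q\n", want)
            }
        }
    }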
	I0601 04:26:04.924586   14580 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220601042455-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
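In the kubelet drop-in above, the empty `ExecStart=` line is deliberate: it clears the ExecStart inherited from the packaged kubelet unit so that only the minikube-specific command line (Docker runtime, node IP 192.168.49.2, the profile's hostname override) takes effect. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below.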
	I0601 04:26:04.924655   14580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:26:04.933113   14580 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:26:04.933164   14580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:26:04.939996   14580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0601 04:26:04.953811   14580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:26:04.966780   14580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0601 04:26:04.979301   14580 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:26:04.983155   14580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:26:04.993190   14580 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342 for IP: 192.168.49.2
	I0601 04:26:04.993351   14580 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:26:04.993405   14580 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:26:04.994010   14580 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.key
	I0601 04:26:04.994228   14580 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.key.dd3b5fb2
	I0601 04:26:04.994339   14580 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.key
	I0601 04:26:04.994792   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:26:04.994838   14580 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:26:04.994852   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:26:04.994897   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:26:04.994933   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:26:04.994966   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:26:04.995036   14580 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:26:04.995574   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:26:05.012976   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 04:26:05.029562   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:26:05.046161   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 04:26:05.064012   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:26:05.081128   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:26:05.098110   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:26:05.116584   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:26:05.134436   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:26:05.152377   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:26:05.170599   14580 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:26:05.187918   14580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:26:05.201055   14580 ssh_runner.go:195] Run: openssl version
	I0601 04:26:05.206392   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:26:05.214051   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.217767   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.217818   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:26:05.222918   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:26:05.230623   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:26:05.238341   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.242884   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.242932   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:26:05.248585   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:26:05.257319   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:26:05.266343   14580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.270643   14580 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.270700   14580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:26:05.276959   14580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
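The symlink names in this block (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding PEM files, obtained from the `openssl x509 -hash -noout -in ...` runs shown above, so that TLS clients can locate the CA by hash under /etc/ssl/certs. A minimal Go sketch of the same link step (paths taken from the log; running it for real requires root and openssl on PATH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert mirrors the step in the log: compute the OpenSSL subject hash
    // of a certificate and symlink it into the system cert directory as "<hash>.0".
    func linkCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate `ln -fs`: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }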
	I0601 04:26:05.288008   14580 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601042455-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601042455-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:26:05.288113   14580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:26:05.322070   14580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:26:05.329648   14580 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:26:05.329669   14580 kubeadm.go:626] restartCluster start
	I0601 04:26:05.329722   14580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:26:05.336276   14580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.336354   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:26:05.410961   14580 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601042455-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:26:05.411150   14580 kubeconfig.go:127] "default-k8s-different-port-20220601042455-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:26:05.411483   14580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:26:05.412859   14580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:26:05.420833   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.420896   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.429500   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.631665   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.631833   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.643018   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:05.831688   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:05.831898   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:05.843153   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.030469   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.030567   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.040665   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.231798   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.231890   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.243084   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.431676   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.431826   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.443194   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.631700   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.631935   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.642686   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:06.830012   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:06.830094   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:06.839242   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.029660   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.029824   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.040634   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.231729   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.231868   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.241891   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.431635   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.431790   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.442906   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.631739   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.631876   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.642010   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:07.831673   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:07.831875   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:07.842287   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.031773   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.031867   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.042244   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.230107   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.230278   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.240940   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.431826   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.431938   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.442260   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.442271   14580 api_server.go:165] Checking apiserver status ...
	I0601 04:26:08.442320   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:26:08.450312   14580 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.450323   14580 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
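The block above is a simple poll: the same `sudo pgrep -xnf kube-apiserver.*minikube.*` is retried roughly every 200 ms for about three seconds, and when no apiserver process appears the code concludes the cluster needs reconfiguring. A minimal Go sketch of that retry pattern (an illustration only, not minikube's actual api_server.go code; the pgrep invocation is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // pollAPIServerPID runs pgrep for the kube-apiserver process at a fixed
    // interval and gives up after a deadline, at which point the caller decides
    // that the cluster needs to be reconfigured.
    func pollAPIServerPID(timeout, interval time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for the condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if pid, err := pollAPIServerPID(3*time.Second, 200*time.Millisecond); err != nil {
            fmt.Println("apiserver not running:", err)
        } else {
            fmt.Println("apiserver pid:", pid)
        }
    }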
	I0601 04:26:08.450330   14580 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:26:08.450388   14580 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:26:08.480503   14580 docker.go:442] Stopping containers: [65d7be1a2882 048b1bdbb6c2 2c25ac3039ad 125e0a096cf4 ab5ecc73c373 2c18d790047c 929c1f424661 dabba0ff7c28 796713528a3d 545f113ce692 86e7f6f4c99d ee398f9c81ed a9ae0036438b f295a496a4ff 35bded318b85]
	I0601 04:26:08.480580   14580 ssh_runner.go:195] Run: docker stop 65d7be1a2882 048b1bdbb6c2 2c25ac3039ad 125e0a096cf4 ab5ecc73c373 2c18d790047c 929c1f424661 dabba0ff7c28 796713528a3d 545f113ce692 86e7f6f4c99d ee398f9c81ed a9ae0036438b f295a496a4ff 35bded318b85
	I0601 04:26:08.511573   14580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:26:08.521553   14580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:26:08.529129   14580 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  1 11:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jun  1 11:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 11:25 /etc/kubernetes/scheduler.conf
	
	I0601 04:26:08.529185   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 04:26:08.536247   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 04:26:08.543227   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 04:26:08.550190   14580 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.550240   14580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:26:08.556997   14580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 04:26:08.563897   14580 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:26:08.563944   14580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:26:08.570777   14580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:26:08.578228   14580 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:26:08.578236   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:08.622444   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.237802   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.364391   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:09.414114   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
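Because existing configuration files were found, only the individual kubeadm init phases are re-run here (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml, rather than a full kubeadm init; the code then waits below for the apiserver process and its /healthz endpoint to come back.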
	I0601 04:26:09.459937   14580 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:26:09.459999   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:09.970965   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:10.470355   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:10.972317   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:26:11.018801   14580 api_server.go:71] duration metric: took 1.55884453s to wait for apiserver process to appear ...
	I0601 04:26:11.018822   14580 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:26:11.018837   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:13.577321   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:26:13.577342   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:26:14.079514   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:14.087324   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:26:14.087344   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:26:14.577516   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:14.584143   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:26:14.584160   14580 api_server.go:102] status: https://127.0.0.1:54223/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:26:15.078134   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:26:15.084601   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 200:
	ok
	I0601 04:26:15.090735   14580 api_server.go:140] control plane version: v1.23.6
	I0601 04:26:15.090746   14580 api_server.go:130] duration metric: took 4.071866633s to wait for apiserver health ...
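The healthz progression above is typical of an apiserver coming back up: the first probe returns 403 because the unauthenticated probe is not yet authorized (the RBAC bootstrap roles that permit anonymous access to /healthz are created by the `rbac/bootstrap-roles` post-start hook, which the subsequent 500 responses show as still failing), and once that hook completes the endpoint returns 200. A minimal Go sketch of the same probe loop (assuming the host-mapped port 54223 from this run; TLS verification is skipped because the probe goes to 127.0.0.1 with a cluster-internal certificate):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://127.0.0.1:54223/healthz" // host-mapped apiserver port from this run
        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("not ready yet:", resp.StatusCode) // 403/500 while booting
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }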
	I0601 04:26:15.090751   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:26:15.090756   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:26:15.090765   14580 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:26:15.097741   14580 system_pods.go:59] 8 kube-system pods found
	I0601 04:26:15.097757   14580 system_pods.go:61] "coredns-64897985d-2cwbz" [f2ee505c-7abb-468c-b82f-0639d95d3f54] Running
	I0601 04:26:15.097764   14580 system_pods.go:61] "etcd-default-k8s-different-port-20220601042455-2342" [b259b886-9d8d-48c7-aa2a-65478e01fab5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:26:15.097771   14580 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [34bbd902-3352-4e4b-b54d-d825aa11c98a] Running
	I0601 04:26:15.097777   14580 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [efd80c45-ac3d-4e6f-81fd-e7bb51b9cffa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:26:15.097781   14580 system_pods.go:61] "kube-proxy-5psvf" [3d2253f1-8b8f-4db0-8081-ca96df760f01] Running
	I0601 04:26:15.097787   14580 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [18d03a0a-c279-4519-aff4-0601818b2b0f] Running
	I0601 04:26:15.097792   14580 system_pods.go:61] "metrics-server-b955d9d8-cb68n" [7969f4c9-b7b6-4268-bbeb-e853689361f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:26:15.097796   14580 system_pods.go:61] "storage-provisioner" [0da4c653-9101-4891-85e8-a014384c87d8] Running
	I0601 04:26:15.097800   14580 system_pods.go:74] duration metric: took 7.031251ms to wait for pod list to return data ...
	I0601 04:26:15.097806   14580 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:26:15.100523   14580 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:26:15.100537   14580 node_conditions.go:123] node cpu capacity is 6
	I0601 04:26:15.100549   14580 node_conditions.go:105] duration metric: took 2.73238ms to run NodePressure ...
	I0601 04:26:15.100560   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:26:15.225479   14580 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 04:26:15.230353   14580 kubeadm.go:777] kubelet initialised
	I0601 04:26:15.230363   14580 kubeadm.go:778] duration metric: took 4.871582ms waiting for restarted kubelet to initialise ...
	I0601 04:26:15.230371   14580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:26:15.235739   14580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-2cwbz" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:15.240544   14580 pod_ready.go:92] pod "coredns-64897985d-2cwbz" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:15.240553   14580 pod_ready.go:81] duration metric: took 4.800313ms waiting for pod "coredns-64897985d-2cwbz" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:15.240559   14580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:17.252022   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:19.252400   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:21.252507   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:23.752885   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:25.754820   14580 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:27.752927   14580 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.752939   14580 pod_ready.go:81] duration metric: took 12.512215332s waiting for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.752945   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.758428   14580 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.758437   14580 pod_ready.go:81] duration metric: took 5.478741ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.758444   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.763037   14580 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.763046   14580 pod_ready.go:81] duration metric: took 4.596913ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.763053   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5psvf" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.767548   14580 pod_ready.go:92] pod "kube-proxy-5psvf" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.767557   14580 pod_ready.go:81] duration metric: took 4.499795ms waiting for pod "kube-proxy-5psvf" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.767564   14580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.771963   14580 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:26:27.771972   14580 pod_ready.go:81] duration metric: took 4.403205ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:27.771978   14580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" ...
	I0601 04:26:30.160100   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:32.659198   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:35.158334   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:37.159528   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:39.160149   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:41.659068   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:44.157795   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:46.658518   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:49.159038   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:51.658963   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:54.158069   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:56.158969   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:26:58.659942   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:00.660463   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:03.160184   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:05.659156   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:08.160717   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:10.660116   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:12.660625   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:15.160199   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:17.162541   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:19.658919   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:21.660968   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:23.661128   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:26.160934   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:28.659300   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:30.659502   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:32.660480   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:35.156691   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:37.157033   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:39.157790   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:41.659953   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:44.157489   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:46.158320   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:48.158876   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:50.159391   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:52.160809   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:54.657873   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:56.658844   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:27:58.660860   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:01.160686   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:03.658576   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:05.660625   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:08.158369   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:10.159184   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:12.657760   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:14.659544   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:17.158299   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:19.159216   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:21.159644   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:23.659877   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:26.159865   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:28.161266   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:30.658249   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:32.659490   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:35.158008   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:37.160518   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:39.161152   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:41.660719   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:44.157806   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:46.159192   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:48.160558   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:50.661861   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:53.158863   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:55.159591   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:57.160005   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:28:59.660242   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:02.159089   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:04.163195   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:06.658567   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:08.661737   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:11.160153   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:13.659262   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:16.160500   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:18.659465   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:20.660803   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:22.661147   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:25.160932   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:27.659142   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:29.661942   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:32.158831   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:34.160363   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:36.162066   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:38.660230   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:40.660953   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:43.161689   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:45.660306   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:47.662797   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:50.161558   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:52.661866   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:55.162273   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:57.162318   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:29:59.663120   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:02.160469   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:04.161108   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:06.161957   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:08.662446   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:11.159732   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:13.161296   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:15.162016   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:17.663004   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:20.160275   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:22.162820   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:24.659704   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:26.659992   14580 pod_ready.go:102] pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace has status "Ready":"False"
	I0601 04:30:28.154244   14580 pod_ready.go:81] duration metric: took 4m0.379163703s waiting for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" ...
	E0601 04:30:28.154270   14580 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-cb68n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 04:30:28.154377   14580 pod_ready.go:38] duration metric: took 4m12.920745187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:30:28.154419   14580 kubeadm.go:630] restartCluster took 4m22.821363871s
	W0601 04:30:28.154538   14580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 04:30:28.154568   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 04:31:06.489649   14580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.334570745s)
	I0601 04:31:06.489708   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:06.500019   14580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:31:06.508704   14580 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 04:31:06.508749   14580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:31:06.516354   14580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 04:31:06.516381   14580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 04:31:07.022688   14580 out.go:204]   - Generating certificates and keys ...
	I0601 04:31:07.547628   14580 out.go:204]   - Booting up control plane ...
	I0601 04:31:14.098649   14580 out.go:204]   - Configuring RBAC rules ...
	I0601 04:31:14.472898   14580 cni.go:95] Creating CNI manager for ""
	I0601 04:31:14.472939   14580 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:31:14.472970   14580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:31:14.473040   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92 minikube.k8s.io/name=default-k8s-different-port-20220601042455-2342 minikube.k8s.io/updated_at=2022_06_01T04_31_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.473054   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.609357   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:14.629440   14580 ops.go:34] apiserver oom_adj: -16
	I0601 04:31:15.302635   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:15.802069   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:16.301733   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:16.802238   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:17.302310   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:17.801852   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:18.301850   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:18.801983   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:19.301780   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:19.802161   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:20.301791   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:20.801992   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:21.302891   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:21.803055   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:22.301886   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:22.802271   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:23.303324   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:23.801846   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:24.302533   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:24.802000   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:25.302196   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:25.801882   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:26.302056   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:26.801937   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:27.301872   14580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 04:31:27.356443   14580 kubeadm.go:1045] duration metric: took 12.883292321s to wait for elevateKubeSystemPrivileges.
	I0601 04:31:27.356475   14580 kubeadm.go:397] StartCluster complete in 5m22.064331829s
	I0601 04:31:27.356499   14580 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:31:27.356584   14580 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:31:27.357174   14580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:31:27.873007   14580 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601042455-2342" rescaled to 1
	I0601 04:31:27.873045   14580 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:31:27.873075   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:31:27.916883   14580 out.go:177] * Verifying Kubernetes components...
	I0601 04:31:27.873103   14580 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:31:27.873265   14580 config.go:178] Loaded profile config "default-k8s-different-port-20220601042455-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:31:27.990179   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:27.990194   14580 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990191   14580 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990230   14580 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990256   14580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990270   14580 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990292   14580 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:27.990297   14580 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601042455-2342"
	W0601 04:31:27.990313   14580 addons.go:165] addon storage-provisioner should already be in state true
	W0601 04:31:27.990319   14580 addons.go:165] addon dashboard should already be in state true
	I0601 04:31:27.990289   14580 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601042455-2342"
	W0601 04:31:27.990347   14580 addons.go:165] addon metrics-server should already be in state true
	I0601 04:31:27.990402   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.990406   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.990546   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:27.991110   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991161   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991193   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:27.991205   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:28.005334   14580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 04:31:28.019407   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.119407   14580 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:28.160009   14580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0601 04:31:28.160022   14580 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:31:28.139235   14580 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:31:28.160061   14580 host.go:66] Checking if "default-k8s-different-port-20220601042455-2342" exists ...
	I0601 04:31:28.181420   14580 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:31:28.182270   14580 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601042455-2342 --format={{.State.Status}}
	I0601 04:31:28.222903   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:31:28.202022   14580 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 04:31:28.213807   14580 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601042455-2342" to be "Ready" ...
	I0601 04:31:28.222906   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:31:28.222991   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.244155   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:31:28.244317   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.265065   14580 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:31:28.248658   14580 node_ready.go:49] node "default-k8s-different-port-20220601042455-2342" has status "Ready":"True"
	I0601 04:31:28.286018   14580 node_ready.go:38] duration metric: took 41.900602ms waiting for node "default-k8s-different-port-20220601042455-2342" to be "Ready" ...
	I0601 04:31:28.286061   14580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:31:28.286111   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:31:28.286143   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:31:28.286305   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.301339   14580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-8p4v4" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:28.323212   14580 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:31:28.323230   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:31:28.323329   14580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601042455-2342
	I0601 04:31:28.353482   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.378157   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.401554   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.425239   14580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601042455-2342/id_rsa Username:docker}
	I0601 04:31:28.501564   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:31:28.598323   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:31:28.598337   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:31:28.606270   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:31:28.608473   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:31:28.608492   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:31:28.690725   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:31:28.690745   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:31:28.702169   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:31:28.702196   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:31:28.793453   14580 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:31:28.793479   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:31:28.799106   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:31:28.799130   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:31:28.885461   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:31:28.897510   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:31:28.897524   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:31:28.918188   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:31:28.918205   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:31:29.002904   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:31:29.002919   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:31:29.117220   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:31:29.117235   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:31:29.201285   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:31:29.201302   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:31:29.287703   14580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.282311049s)
	I0601 04:31:29.287729   14580 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 04:31:29.291304   14580 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:31:29.291322   14580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:31:29.392031   14580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:31:29.727831   14580 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601042455-2342"
	I0601 04:31:29.886057   14580 pod_ready.go:92] pod "coredns-64897985d-8p4v4" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:29.886074   14580 pod_ready.go:81] duration metric: took 1.584692102s waiting for pod "coredns-64897985d-8p4v4" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:29.886087   14580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-cb9n8" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:30.725108   14580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.333022241s)
	I0601 04:31:30.806402   14580 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 04:31:30.843522   14580 addons.go:417] enableAddons completed in 2.970381543s
	I0601 04:31:31.905757   14580 pod_ready.go:102] pod "coredns-64897985d-cb9n8" in "kube-system" namespace has status "Ready":"False"
	I0601 04:31:32.905434   14580 pod_ready.go:92] pod "coredns-64897985d-cb9n8" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.905450   14580 pod_ready.go:81] duration metric: took 3.019316909s waiting for pod "coredns-64897985d-cb9n8" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.905457   14580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.914545   14580 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.914568   14580 pod_ready.go:81] duration metric: took 9.084073ms waiting for pod "etcd-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.914583   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.926766   14580 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.926777   14580 pod_ready.go:81] duration metric: took 12.185589ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.926785   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.936236   14580 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.936249   14580 pod_ready.go:81] duration metric: took 9.458235ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.936261   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p7tsj" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.982358   14580 pod_ready.go:92] pod "kube-proxy-p7tsj" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:32.982376   14580 pod_ready.go:81] duration metric: took 46.107821ms waiting for pod "kube-proxy-p7tsj" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:32.982388   14580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:33.300851   14580 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace has status "Ready":"True"
	I0601 04:31:33.300861   14580 pod_ready.go:81] duration metric: took 318.462177ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601042455-2342" in "kube-system" namespace to be "Ready" ...
	I0601 04:31:33.300867   14580 pod_ready.go:38] duration metric: took 5.014691974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 04:31:33.300883   14580 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:31:33.300930   14580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:31:33.312670   14580 api_server.go:71] duration metric: took 5.439538065s to wait for apiserver process to appear ...
	I0601 04:31:33.312684   14580 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:31:33.312690   14580 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54223/healthz ...
	I0601 04:31:33.318481   14580 api_server.go:266] https://127.0.0.1:54223/healthz returned 200:
	ok
	I0601 04:31:33.319652   14580 api_server.go:140] control plane version: v1.23.6
	I0601 04:31:33.319662   14580 api_server.go:130] duration metric: took 6.974325ms to wait for apiserver health ...
	I0601 04:31:33.319668   14580 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:31:33.503644   14580 system_pods.go:59] 9 kube-system pods found
	I0601 04:31:33.503658   14580 system_pods.go:61] "coredns-64897985d-8p4v4" [ae0cb737-4e73-40a0-b7ca-c5fb35908ad9] Running
	I0601 04:31:33.503664   14580 system_pods.go:61] "coredns-64897985d-cb9n8" [0b71bc2a-d0ac-4d4d-9420-1422f088b267] Running
	I0601 04:31:33.503672   14580 system_pods.go:61] "etcd-default-k8s-different-port-20220601042455-2342" [d64e3142-a5a3-438a-b1dd-f8fda41cf500] Running
	I0601 04:31:33.503684   14580 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [e7ebee32-6122-4fd0-8e7a-26d16cf09fd5] Running
	I0601 04:31:33.503691   14580 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [736247dc-e330-4d49-a9b4-38e9f4bf2f55] Running
	I0601 04:31:33.503697   14580 system_pods.go:61] "kube-proxy-p7tsj" [4a00e2b2-3357-4d45-812e-b96583883072] Running
	I0601 04:31:33.503708   14580 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [547e2d90-4aa4-4ffa-8227-7a87069bc624] Running
	I0601 04:31:33.503718   14580 system_pods.go:61] "metrics-server-b955d9d8-vqpwl" [53aca426-4c43-4abd-bbb9-ca59d11ca961] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:31:33.503726   14580 system_pods.go:61] "storage-provisioner" [eb46d9b1-266a-406d-bfa9-384a28696367] Running
	I0601 04:31:33.503737   14580 system_pods.go:74] duration metric: took 184.060787ms to wait for pod list to return data ...
	I0601 04:31:33.503746   14580 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:31:33.700368   14580 default_sa.go:45] found service account: "default"
	I0601 04:31:33.700381   14580 default_sa.go:55] duration metric: took 196.626716ms for default service account to be created ...
	I0601 04:31:33.700386   14580 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 04:31:33.904017   14580 system_pods.go:86] 9 kube-system pods found
	I0601 04:31:33.904032   14580 system_pods.go:89] "coredns-64897985d-8p4v4" [ae0cb737-4e73-40a0-b7ca-c5fb35908ad9] Running
	I0601 04:31:33.904036   14580 system_pods.go:89] "coredns-64897985d-cb9n8" [0b71bc2a-d0ac-4d4d-9420-1422f088b267] Running
	I0601 04:31:33.904040   14580 system_pods.go:89] "etcd-default-k8s-different-port-20220601042455-2342" [d64e3142-a5a3-438a-b1dd-f8fda41cf500] Running
	I0601 04:31:33.904050   14580 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220601042455-2342" [e7ebee32-6122-4fd0-8e7a-26d16cf09fd5] Running
	I0601 04:31:33.904056   14580 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220601042455-2342" [736247dc-e330-4d49-a9b4-38e9f4bf2f55] Running
	I0601 04:31:33.904060   14580 system_pods.go:89] "kube-proxy-p7tsj" [4a00e2b2-3357-4d45-812e-b96583883072] Running
	I0601 04:31:33.904064   14580 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220601042455-2342" [547e2d90-4aa4-4ffa-8227-7a87069bc624] Running
	I0601 04:31:33.904069   14580 system_pods.go:89] "metrics-server-b955d9d8-vqpwl" [53aca426-4c43-4abd-bbb9-ca59d11ca961] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:31:33.904073   14580 system_pods.go:89] "storage-provisioner" [eb46d9b1-266a-406d-bfa9-384a28696367] Running
	I0601 04:31:33.904079   14580 system_pods.go:126] duration metric: took 203.685319ms to wait for k8s-apps to be running ...
	I0601 04:31:33.904101   14580 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 04:31:33.904156   14580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:31:33.916392   14580 system_svc.go:56] duration metric: took 12.281443ms WaitForService to wait for kubelet.
	I0601 04:31:33.916408   14580 kubeadm.go:572] duration metric: took 6.043269745s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 04:31:33.916426   14580 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:31:34.101016   14580 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:31:34.101029   14580 node_conditions.go:123] node cpu capacity is 6
	I0601 04:31:34.101041   14580 node_conditions.go:105] duration metric: took 184.609149ms to run NodePressure ...
	I0601 04:31:34.101051   14580 start.go:213] waiting for startup goroutines ...
	I0601 04:31:34.134136   14580 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:31:34.156421   14580 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220601042455-2342" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:26:01 UTC, end at Wed 2022-06-01 11:32:36 UTC. --
	Jun 01 11:30:55 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:30:55.077328142Z" level=info msg="ignoring event" container=bc0a031e9a8b873924db170fc5504e7226c67e6c498a6b1b7ebf6baa8ce7ed5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.167958835Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=efbfded29c7c85527125ddc6fa14baf8b1b350b8587296ebd811d04fcb467eec
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.196232362Z" level=info msg="ignoring event" container=efbfded29c7c85527125ddc6fa14baf8b1b350b8587296ebd811d04fcb467eec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.292553486Z" level=info msg="ignoring event" container=d1bcb871362abcabdac28a513d9e259519127ce00a6976cc6dd36416b7e923e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.396669666Z" level=info msg="ignoring event" container=b505e8fe19d4020fb99a869230aae36a07b8e7e85e73758bc3268d673e6e22c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.501940633Z" level=info msg="ignoring event" container=6c6ef83e3aeb282d956159a08b9759bbf20155229f4ecfb127a53757be2dc427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:05 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:05.647855928Z" level=info msg="ignoring event" container=d085de832bf29711716070afc2653e953dcefeecc6aa4206d2412f845c6a4387 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:30 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:30.879039995Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:30 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:30.879086766Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:30 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:30.881837621Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:32 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:32.024185573Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:31:32 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:32.277565690Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 11:31:35 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:35.780566009Z" level=info msg="ignoring event" container=6a3a318fa62b3282dc86d3c8bda6f96dbef15638b15b1f378f3b75d30a325033 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:35 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:35.809691517Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 11:31:36 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:36.060887471Z" level=info msg="ignoring event" container=60dbc27c19faeee48af2d41bc8eca6fcecd819e5451bab70c535dcd0c115f59d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:39 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:39.625762405Z" level=info msg="ignoring event" container=f6ac4e004dce82992799c84e45437423e817f05464e916e474ae2e4c949a07e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:39 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:39.836572061Z" level=info msg="ignoring event" container=496609584bea71bdc46f4e36bf82bf974e4c702cf8b9983ff4958d3ca4289de2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:31:46 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:46.721740419Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:46 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:46.721838876Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:46 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:46.723296922Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:31:51 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:31:51.851221716Z" level=info msg="ignoring event" container=9d4f92bb5f43f9f1d14b6ac3f8eef771d3fec5755606ef9ad1b3148da890392a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:32:33.180243702Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:32:33.180304945Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:32:33.182203602Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 dockerd[132]: time="2022-06-01T11:32:33.186141279Z" level=info msg="ignoring event" container=9ba16dc99a8f112d424ca5e2c0751997b75ef2d23eabeb7e8fe1cdec9fa4be37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	9ba16dc99a8f1       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   3                   068b18d47931a
	d7e8a986ad3fe       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   56 seconds ago       Running             kubernetes-dashboard        0                   ced4b04531433
	09117d0ae0022       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   747c706548598
	f0d66b891e748       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   6ace882d88d40
	297e6de0e635a       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   5b12d131f066f
	96a446a85e3e4       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   bb23663b4145d
	ae8b657759ba5       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   eb0b0ede8c705
	195f862cba9a8       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   bd709f91f83c7
	980bfd0c53394       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   9e591edac9643
	
	* 
	* ==> coredns [f0d66b891e74] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601042455-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601042455-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=default-k8s-different-port-20220601042455-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_31_14_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:31:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601042455-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:32:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:31:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:31:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:31:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:32:29 +0000   Wed, 01 Jun 2022 11:32:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220601042455-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                c5fbb63d-9472-4980-961d-f3d3881cf336
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-cb9n8                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     69s
	  kube-system                 etcd-default-k8s-different-port-20220601042455-2342                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601042455-2342              250m (4%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601042455-2342     200m (3%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-p7tsj                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601042455-2342              100m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 metrics-server-b955d9d8-vqpwl                                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         67s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-4rh9k                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-vsgbf                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 68s   kube-proxy  
	  Normal  NodeHasSufficientMemory  82s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 82s   kubelet     Starting kubelet.
	  Normal  NodeReady                71s   kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeReady
	  Normal  Starting                 7s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s    kubelet     Node default-k8s-different-port-20220601042455-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [980bfd0c5339] <==
	* {"level":"info","ts":"2022-06-01T11:31:09.039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T11:31:09.039Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:31:09.042Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220601042455-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:31:09.534Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:31:09.535Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:32:37 up  1:13,  0 users,  load average: 0.55, 0.51, 0.70
	Linux default-k8s-different-port-20220601042455-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [195f862cba9a] <==
	* I0601 11:31:12.948082       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:31:12.971487       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:31:13.045014       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 11:31:13.048601       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 11:31:13.049226       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:31:13.052401       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:31:13.825790       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:31:14.338549       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:31:14.344547       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 11:31:14.354424       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:31:14.542398       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:31:27.411587       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:31:27.560531       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:31:28.051465       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:31:29.717904       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.106.246.152]
	W0601 11:31:30.422805       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:31:30.422894       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:31:30.422900       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:31:30.634323       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.155.50]
	I0601 11:31:30.715175       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.108.27.40]
	W0601 11:32:30.381671       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:32:30.381730       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:32:30.381738       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [96a446a85e3e] <==
	* I0601 11:31:29.425333       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 11:31:29.495532       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 11:31:29.512593       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-vqpwl"
	I0601 11:31:30.441455       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 11:31:30.448620       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.492224       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:31:30.496176       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.496340       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.538112       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	E0601 11:31:30.539699       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.539784       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.547810       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.553977       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:31:30.559633       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.559784       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.565625       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 11:31:30.565625       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.565624       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.565748       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 11:31:30.595694       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 11:31:30.595745       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 11:31:30.611290       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-4rh9k"
	I0601 11:31:30.699165       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-vsgbf"
	E0601 11:32:28.878861       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 11:32:28.888431       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [297e6de0e635] <==
	* I0601 11:31:28.003544       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:31:28.003686       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:31:28.003732       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:31:28.039405       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:31:28.039511       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:31:28.039526       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:31:28.039545       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:31:28.040676       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:31:28.041626       1 config.go:317] "Starting service config controller"
	I0601 11:31:28.041642       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:31:28.041680       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:31:28.041685       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:31:28.141921       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:31:28.141933       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [ae8b657759ba] <==
	* W0601 11:31:11.724943       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:31:11.725033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:31:11.725410       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:11.725440       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:11.725653       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:31:11.725702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 11:31:11.726059       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:31:11.726104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:31:11.726306       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:31:11.726337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:31:11.726516       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:31:11.726546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:31:11.726705       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:11.726737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:12.588733       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:12.588793       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:12.627410       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:31:12.627508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:31:12.726171       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:31:12.726214       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:31:12.777877       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:31:12.777894       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:31:12.882386       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:31:12.882426       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:31:15.993625       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:26:01 UTC, end at Wed 2022-06-01 11:32:37 UTC. --
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453975    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b71bc2a-d0ac-4d4d-9420-1422f088b267-config-volume\") pod \"coredns-64897985d-cb9n8\" (UID: \"0b71bc2a-d0ac-4d4d-9420-1422f088b267\") " pod="kube-system/coredns-64897985d-cb9n8"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.453992    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfsp\" (UniqueName: \"kubernetes.io/projected/0b71bc2a-d0ac-4d4d-9420-1422f088b267-kube-api-access-jpfsp\") pod \"coredns-64897985d-cb9n8\" (UID: \"0b71bc2a-d0ac-4d4d-9420-1422f088b267\") " pod="kube-system/coredns-64897985d-cb9n8"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454008    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a00e2b2-3357-4d45-812e-b96583883072-lib-modules\") pod \"kube-proxy-p7tsj\" (UID: \"4a00e2b2-3357-4d45-812e-b96583883072\") " pod="kube-system/kube-proxy-p7tsj"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454029    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf58f\" (UniqueName: \"kubernetes.io/projected/eb46d9b1-266a-406d-bfa9-384a28696367-kube-api-access-mf58f\") pod \"storage-provisioner\" (UID: \"eb46d9b1-266a-406d-bfa9-384a28696367\") " pod="kube-system/storage-provisioner"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454090    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a00e2b2-3357-4d45-812e-b96583883072-xtables-lock\") pod \"kube-proxy-p7tsj\" (UID: \"4a00e2b2-3357-4d45-812e-b96583883072\") " pod="kube-system/kube-proxy-p7tsj"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454225    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvr7t\" (UniqueName: \"kubernetes.io/projected/ed136c46-f2f0-412c-9b65-f56260bc72b0-kube-api-access-rvr7t\") pod \"dashboard-metrics-scraper-56974995fc-4rh9k\" (UID: \"ed136c46-f2f0-412c-9b65-f56260bc72b0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-4rh9k"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454419    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5e30b028-d8e4-4995-a03e-f3039f2e629a-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-vsgbf\" (UID: \"5e30b028-d8e4-4995-a03e-f3039f2e629a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-vsgbf"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454565    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkcc9\" (UniqueName: \"kubernetes.io/projected/4a00e2b2-3357-4d45-812e-b96583883072-kube-api-access-rkcc9\") pod \"kube-proxy-p7tsj\" (UID: \"4a00e2b2-3357-4d45-812e-b96583883072\") " pod="kube-system/kube-proxy-p7tsj"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454655    7278 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjr68\" (UniqueName: \"kubernetes.io/projected/53aca426-4c43-4abd-bbb9-ca59d11ca961-kube-api-access-cjr68\") pod \"metrics-server-b955d9d8-vqpwl\" (UID: \"53aca426-4c43-4abd-bbb9-ca59d11ca961\") " pod="kube-system/metrics-server-b955d9d8-vqpwl"
	Jun 01 11:32:30 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:30.454758    7278 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:32:31 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:31.611768    7278 request.go:665] Waited for 1.148365663s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 01 11:32:31 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:31.616646    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:31 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:31.872628    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:32 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:32.055336    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:32 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:32.290787    7278 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220601042455-2342\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220601042455-2342"
	Jun 01 11:32:32 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:32.820407    7278 scope.go:110] "RemoveContainer" containerID="9d4f92bb5f43f9f1d14b6ac3f8eef771d3fec5755606ef9ad1b3148da890392a"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183153    7278 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183203    7278 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183352    7278 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cjr68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-vqpwl_kube-system(53aca426-4c43-4abd-bbb9-ca59d11ca961): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.183416    7278 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-vqpwl" podUID=53aca426-4c43-4abd-bbb9-ca59d11ca961
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:33.479887    7278 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-4rh9k through plugin: invalid network status for"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:33.484757    7278 scope.go:110] "RemoveContainer" containerID="9d4f92bb5f43f9f1d14b6ac3f8eef771d3fec5755606ef9ad1b3148da890392a"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:33.487262    7278 scope.go:110] "RemoveContainer" containerID="9ba16dc99a8f112d424ca5e2c0751997b75ef2d23eabeb7e8fe1cdec9fa4be37"
	Jun 01 11:32:33 default-k8s-different-port-20220601042455-2342 kubelet[7278]: E0601 11:32:33.487862    7278 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-4rh9k_kubernetes-dashboard(ed136c46-f2f0-412c-9b65-f56260bc72b0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-4rh9k" podUID=ed136c46-f2f0-412c-9b65-f56260bc72b0
	Jun 01 11:32:34 default-k8s-different-port-20220601042455-2342 kubelet[7278]: I0601 11:32:34.489766    7278 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-4rh9k through plugin: invalid network status for"
	
	* 
	* ==> kubernetes-dashboard [d7e8a986ad3f] <==
	* 2022/06/01 11:31:40 Starting overwatch
	2022/06/01 11:31:40 Using namespace: kubernetes-dashboard
	2022/06/01 11:31:40 Using in-cluster config to connect to apiserver
	2022/06/01 11:31:40 Using secret token for csrf signing
	2022/06/01 11:31:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 11:31:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 11:31:40 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 11:31:40 Generating JWE encryption key
	2022/06/01 11:31:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 11:31:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 11:31:40 Initializing JWE encryption key from synchronized object
	2022/06/01 11:31:40 Creating in-cluster Sidecar client
	2022/06/01 11:31:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 11:31:40 Serving insecurely on HTTP port: 9090
	2022/06/01 11:32:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [09117d0ae002] <==
	* I0601 11:31:30.971940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:31:30.978879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:31:30.978926       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:31:30.984255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:31:30.984318       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"deefa71f-7253-48a1-9c18-c3eb9316f0b9", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220601042455-2342_bbdcfe2c-8666-4c23-b99d-94ac01b710f8 became leader
	I0601 11:31:30.984451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601042455-2342_bbdcfe2c-8666-4c23-b99d-94ac01b710f8!
	I0601 11:31:31.085434       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601042455-2342_bbdcfe2c-8666-4c23-b99d-94ac01b710f8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-vqpwl
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 describe pod metrics-server-b955d9d8-vqpwl
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601042455-2342 describe pod metrics-server-b955d9d8-vqpwl: exit status 1 (289.225981ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-vqpwl" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601042455-2342 describe pod metrics-server-b955d9d8-vqpwl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (44.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:32:27.584736    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:32:51.720476    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:32:53.014549    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:33:12.187740    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 04:33:20.706139    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:35:14.710785    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:35:38.667445    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:38.673134    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:38.683270    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:38.703779    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:38.744034    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:38.826264    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:38.988485    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:39.310746    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:39.952991    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:41.233565    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:43.793856    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:35:48.916073    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
E0601 04:35:49.697513    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:35:59.156412    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:36:11.967779    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:36:19.638918    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:36:47.051534    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:36:49.141651    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:37:00.608511    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:37:12.773631    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:37:27.597412    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:37:40.711457    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:37:51.732695    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:37:53.026671    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601041659-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:38:22.531059    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 04:38:30.461960    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:52369/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 13 times)
E0601 04:39:00.646034    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 30 times)
E0601 04:39:31.686391    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 43 times)
E0601 04:40:14.720955    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 22 times)
E0601 04:40:38.678846    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 11 times)
E0601 04:40:49.708446    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 15 times)
E0601 04:41:06.375534    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601042455-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 5 times)
E0601 04:41:11.979816    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
(last message repeated 15 times)
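The run of warnings above is the test's poll loop giving up on each list attempt once its context deadline has passed. Below is a minimal sketch of that kind of loop, assuming k8s.io/client-go; the kubeconfig path and intervals are illustrative placeholders, not the actual helpers_test.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One overall deadline, comparable to the test's 9m0s budget.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Each failed attempt is logged and the poll keeps retrying,
			// which is what produces the repeated WARNING lines above.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}

Once the shared context's deadline passes, every further List call fails inside client-go's rate limiter Wait with "context deadline exceeded", which is consistent with the same warning repeating until the harness reports the 9m0s timeout below.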
start_stop_delete_test.go:289: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (448.220051ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:289: status error: exit status 2 (may be ok)
start_stop_delete_test.go:289: "old-k8s-version-20220601040844-2342" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220601040844-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601040844-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.801µs)
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220601040844-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
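For context, the assertion at start_stop_delete_test.go:299 appears to amount to fetching the dashboard-metrics-scraper deployment and checking that one of its containers uses an image containing "k8s.gcr.io/echoserver:1.4"; with the apiserver reported Stopped and kubectl timing out, the deployment info above is empty. A rough, hypothetical equivalent of that check using client-go (not the test's actual helper) is:

// Sketch of the image assertion, assuming k8s.io/client-go; everything other
// than the namespace, deployment, and image string seen in the log above is a
// placeholder.
package dashboardcheck

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addonImagePresent reports whether any container in the named deployment's
// pod template uses an image containing want, e.g. want = "k8s.gcr.io/echoserver:1.4"
// for deployment "dashboard-metrics-scraper" in namespace "kubernetes-dashboard".
func addonImagePresent(ctx context.Context, client kubernetes.Interface,
	ns, name, want string) (bool, error) {
	d, err := client.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("get deployment %s/%s: %w", ns, name, err)
	}
	for _, c := range d.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, want) {
			return true, nil
		}
	}
	return false, nil
}

In this run the check never got that far: both the pod poll and the kubectl describe hit their deadlines first, so the failure is reported with empty deployment info.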
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601040844-2342
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601040844-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef",
	        "Created": "2022-06-01T11:08:51.714948054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210556,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:14:29.397998414Z",
	            "FinishedAt": "2022-06-01T11:14:26.589423316Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/hosts",
	        "LogPath": "/var/lib/docker/containers/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef/91a44163d23550afa95e57092af54742c4a2e86b4336ccb0573c35ecc80094ef-json.log",
	        "Name": "/old-k8s-version-20220601040844-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601040844-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601040844-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/877002bf6efa6b43d3c16b0de02746f563ba9b189b8f34b7ec178fe6662a56a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601040844-2342",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601040844-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601040844-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601040844-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67742c0ebbdd1f76c16da912020c2ef1bdaa88cf6af0da25d66eaecd83c8f4d5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52365"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52366"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52367"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52368"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52369"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67742c0ebbdd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601040844-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91a44163d235",
	                        "old-k8s-version-20220601040844-2342"
	                    ],
	                    "NetworkID": "19418e1daf902e10e91ecb0632ae46e6cbb8b43c0deeca829a591ae95b7f1e4b",
	                    "EndpointID": "f03c2fa8111d36ee41f3d8b53613ddd37aee00df9d89313a9d833d5735db5784",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
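
The "NetworkSettings.Ports" block in the inspect output above is what the test helpers read when they need the host-side SSH port for the container. Below is a minimal Go sketch of that lookup, built around the same Go template string that appears verbatim in the cli_runner.go lines later in this log; the function name hostPort and the surrounding program are illustrative assumptions and are not part of the minikube test suite, while the container name and the expected port 52365 are copied from the output above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort asks the local Docker daemon which host port is published for
	// containerPort (e.g. "22/tcp") on the named container, using the same
	// Go template query that the cli_runner.go calls in this log use.
	func hostPort(container, containerPort string) (string, error) {
		format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Container name taken from the inspect output above; per that output,
		// 22/tcp is expected to be published on 127.0.0.1:52365.
		port, err := hostPort("old-k8s-version-20220601040844-2342", "22/tcp")
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port)
	}
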
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (428.480794ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220601040844-2342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220601040844-2342 logs -n 25: (3.567053537s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342                        | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601042455-2342             | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601042455-2342             | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601043243-2342 --memory=2200            | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:33 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601043243-2342 --memory=2200            | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220601043243-2342                             | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220601043243-2342                             | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:33:35
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:33:35.720618   15168 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:33:35.720870   15168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:33:35.720880   15168 out.go:309] Setting ErrFile to fd 2...
	I0601 04:33:35.720890   15168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:33:35.721020   15168 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:33:35.721322   15168 out.go:303] Setting JSON to false
	I0601 04:33:35.737205   15168 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5585,"bootTime":1654077630,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:33:35.737334   15168 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:33:35.759559   15168 out.go:177] * [newest-cni-20220601043243-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:33:35.781521   15168 notify.go:193] Checking for updates...
	I0601 04:33:35.803476   15168 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:33:35.825398   15168 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:35.847711   15168 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:33:35.876102   15168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:33:35.896119   15168 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:33:35.917997   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:35.918629   15168 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:33:35.991115   15168 docker.go:137] docker version: linux-20.10.14
	I0601 04:33:35.991240   15168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:33:36.117563   15168 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:33:36.062705506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:33:36.139492   15168 out.go:177] * Using the docker driver based on existing profile
	I0601 04:33:36.161209   15168 start.go:284] selected driver: docker
	I0601 04:33:36.161237   15168 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[a
piserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:36.161459   15168 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:33:36.164798   15168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:33:36.292407   15168 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:33:36.238121209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:33:36.292574   15168 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 04:33:36.292590   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:36.292599   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:36.292612   15168 start_flags.go:306] config:
	{Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_
ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:36.314637   15168 out.go:177] * Starting control plane node newest-cni-20220601043243-2342 in cluster newest-cni-20220601043243-2342
	I0601 04:33:36.336250   15168 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:33:36.358074   15168 out.go:177] * Pulling base image ...
	I0601 04:33:36.400252   15168 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:33:36.400261   15168 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:33:36.400324   15168 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:33:36.400340   15168 cache.go:57] Caching tarball of preloaded images
	I0601 04:33:36.400507   15168 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:33:36.400537   15168 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 04:33:36.401383   15168 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/config.json ...
	I0601 04:33:36.468470   15168 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:33:36.468517   15168 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:33:36.468530   15168 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:33:36.468582   15168 start.go:352] acquiring machines lock for newest-cni-20220601043243-2342: {Name:mk1c220030e5dc7346d70b8e86adc86c2159451d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:33:36.468682   15168 start.go:356] acquired machines lock for "newest-cni-20220601043243-2342" in 63.546µs
	I0601 04:33:36.468703   15168 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:33:36.468712   15168 fix.go:55] fixHost starting: 
	I0601 04:33:36.468936   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:36.538969   15168 fix.go:103] recreateIfNeeded on newest-cni-20220601043243-2342: state=Stopped err=<nil>
	W0601 04:33:36.539000   15168 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:33:36.560938   15168 out.go:177] * Restarting existing docker container for "newest-cni-20220601043243-2342" ...
	I0601 04:33:36.582691   15168 cli_runner.go:164] Run: docker start newest-cni-20220601043243-2342
	I0601 04:33:36.965186   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:37.040816   15168 kic.go:416] container "newest-cni-20220601043243-2342" state is running.
	I0601 04:33:37.041410   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:37.123521   15168 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/config.json ...
	I0601 04:33:37.124035   15168 machine.go:88] provisioning docker machine ...
	I0601 04:33:37.124080   15168 ubuntu.go:169] provisioning hostname "newest-cni-20220601043243-2342"
	I0601 04:33:37.124139   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.206841   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:37.207075   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:37.207093   15168 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601043243-2342 && echo "newest-cni-20220601043243-2342" | sudo tee /etc/hostname
	I0601 04:33:37.334284   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601043243-2342
	
	I0601 04:33:37.334371   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.411022   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:37.411207   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:37.411228   15168 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601043243-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601043243-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601043243-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:33:37.529094   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:33:37.529112   15168 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:33:37.529177   15168 ubuntu.go:177] setting up certificates
	I0601 04:33:37.529187   15168 provision.go:83] configureAuth start
	I0601 04:33:37.529248   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:37.607518   15168 provision.go:138] copyHostCerts
	I0601 04:33:37.607606   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:33:37.607615   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:33:37.607703   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:33:37.607912   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:33:37.607923   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:33:37.607982   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:33:37.608149   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:33:37.608155   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:33:37.608215   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:33:37.608334   15168 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601043243-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601043243-2342]
	I0601 04:33:37.777252   15168 provision.go:172] copyRemoteCerts
	I0601 04:33:37.777323   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:33:37.777383   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.854290   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:37.942665   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:33:37.962614   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0601 04:33:37.983784   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:33:38.002393   15168 provision.go:86] duration metric: configureAuth took 473.185921ms
	I0601 04:33:38.002406   15168 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:33:38.002590   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:38.002643   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.076962   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.077109   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.077122   15168 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:33:38.199559   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:33:38.199574   15168 ubuntu.go:71] root file system type: overlay
	I0601 04:33:38.199704   15168 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:33:38.199770   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.271586   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.271747   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.271799   15168 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:33:38.400393   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:33:38.400485   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.473348   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.473500   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.473515   15168 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:33:38.599343   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:33:38.599357   15168 machine.go:91] provisioned docker machine in 1.475295788s
	I0601 04:33:38.599371   15168 start.go:306] post-start starting for "newest-cni-20220601043243-2342" (driver="docker")
	I0601 04:33:38.599376   15168 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:33:38.599429   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:33:38.599473   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.672100   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:38.756558   15168 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:33:38.760333   15168 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:33:38.760347   15168 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:33:38.760354   15168 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:33:38.760358   15168 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:33:38.760366   15168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:33:38.760478   15168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:33:38.760619   15168 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:33:38.760771   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:33:38.767743   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:33:38.784935   15168 start.go:309] post-start completed in 185.552686ms
	I0601 04:33:38.785046   15168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:33:38.785139   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.856795   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:38.942547   15168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:33:38.946912   15168 fix.go:57] fixHost completed within 2.478168311s
	I0601 04:33:38.946922   15168 start.go:81] releasing machines lock for "newest-cni-20220601043243-2342", held for 2.478201289s
	I0601 04:33:38.946988   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:39.020056   15168 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:33:39.020057   15168 ssh_runner.go:195] Run: systemctl --version
	I0601 04:33:39.020128   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.020136   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.099694   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:39.102563   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:39.184807   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:33:39.318352   15168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:33:39.327971   15168 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:33:39.328023   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:33:39.337289   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:33:39.350192   15168 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:33:39.421583   15168 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:33:39.493616   15168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:33:39.505293   15168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:33:39.575468   15168 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:33:39.585698   15168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:33:39.621072   15168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:33:39.702871   15168 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:33:39.702997   15168 cli_runner.go:164] Run: docker exec -t newest-cni-20220601043243-2342 dig +short host.docker.internal
	I0601 04:33:39.840169   15168 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:33:39.840253   15168 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:33:39.844483   15168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
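
The bash one-liner above is how minikube keeps /etc/hosts idempotent: strip any existing host.minikube.internal line, append a fresh ip/hostname pair, and copy the temp file back over /etc/hosts. A minimal Go sketch of the same idea (a hypothetical helper, not minikube's actual code; the IP and hostname are the ones from the log line above):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry drops any line already mapping hostname and appends a
    // fresh "<ip>\t<hostname>" entry, mirroring the grep -v / echo pipeline
    // in the log above. Writing /etc/hosts back requires root.
    func upsertHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue // stale entry, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Values taken from the log line above.
        if err := upsertHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
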
	I0601 04:33:39.855140   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.950021   15168 out.go:177]   - kubelet.network-plugin=cni
	I0601 04:33:39.971828   15168 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 04:33:39.993668   15168 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:33:39.993799   15168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:33:40.025393   15168 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:33:40.025408   15168 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:33:40.025484   15168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:33:40.055373   15168 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:33:40.055395   15168 cache_images.go:84] Images are preloaded, skipping loading
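
The two "docker images" listings above are the preload check: the node's local image list is compared against the expected set for v1.23.6, and because every tag is already present, both tarball extraction ("skipping extraction") and image loading ("skipping loading") are skipped. A small Go sketch of that comparison (illustrative only; the real check runs docker over SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every expected image tag already appears
    // in the local docker image list, which is the condition that lets the log
    // above skip preload extraction and loading.
    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        // Image list copied from the -- stdout -- block above.
        expected := []string{
            "k8s.gcr.io/kube-apiserver:v1.23.6",
            "k8s.gcr.io/kube-scheduler:v1.23.6",
            "k8s.gcr.io/kube-controller-manager:v1.23.6",
            "k8s.gcr.io/kube-proxy:v1.23.6",
            "k8s.gcr.io/etcd:3.5.1-0",
            "k8s.gcr.io/coredns/coredns:v1.8.6",
            "k8s.gcr.io/pause:3.6",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        ok, err := imagesPreloaded(expected)
        fmt.Println(ok, err)
    }
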
	I0601 04:33:40.055472   15168 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:33:40.131178   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:40.131190   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:40.131210   15168 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 04:33:40.131223   15168 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601043243-2342 NodeName:newest-cni-20220601043243-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:33:40.131366   15168 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220601043243-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
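
The generated kubeadm.yaml above carries the kubeadm.pod-network-cidr extra option (192.168.111.111/16) straight into podSubnet and clusterCIDR. That value has host bits set; a quick Go check (illustrative only, not part of minikube) shows the network it actually denotes:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The pod-network-cidr from the config above has host bits set;
        // parsing it shows the network portion that CIDR refers to.
        ip, ipnet, err := net.ParseCIDR("192.168.111.111/16")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip)    // 192.168.111.111
        fmt.Println(ipnet) // 192.168.0.0/16
    }
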
	
	I0601 04:33:40.131481   15168 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601043243-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:33:40.131563   15168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:33:40.139314   15168 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:33:40.139369   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:33:40.146700   15168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0601 04:33:40.159242   15168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:33:40.173044   15168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
	I0601 04:33:40.187746   15168 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:33:40.192011   15168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:33:40.202490   15168 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342 for IP: 192.168.49.2
	I0601 04:33:40.202649   15168 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:33:40.202730   15168 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:33:40.202816   15168 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/client.key
	I0601 04:33:40.202874   15168 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.key.dd3b5fb2
	I0601 04:33:40.202934   15168 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.key
	I0601 04:33:40.203229   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:33:40.203287   15168 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:33:40.203317   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:33:40.203414   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:33:40.203445   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:33:40.203509   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:33:40.203625   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:33:40.204248   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:33:40.222705   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 04:33:40.240287   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:33:40.261621   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:33:40.279108   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:33:40.296730   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:33:40.313701   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:33:40.331521   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:33:40.349032   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:33:40.366996   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:33:40.384175   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:33:40.402174   15168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:33:40.415032   15168 ssh_runner.go:195] Run: openssl version
	I0601 04:33:40.420329   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:33:40.428113   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.432369   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.432419   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.437889   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:33:40.445633   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:33:40.453598   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.457515   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.457560   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.462878   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:33:40.470241   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:33:40.478182   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.482194   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.482243   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.487883   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
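
The openssl/ln pairs above copy each CA into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0), which is how the system TLS stack locates it. A simplified Go sketch of that step (hypothetical helper; the real sequence, as shown above, links via an intermediate /etc/ssl/certs/<name>.pem symlink first):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCACert installs a CA pem under the OpenSSL subject-hash name
    // (<hash>.0) in /etc/ssl/certs, mirroring the openssl x509 -hash / ln -fs
    // pairs in the log above.
    func linkCACert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        // Path taken from the log above.
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
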
	I0601 04:33:40.495262   15168 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:40.495361   15168 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:33:40.525182   15168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:33:40.532579   15168 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:33:40.532592   15168 kubeadm.go:626] restartCluster start
	I0601 04:33:40.532637   15168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:33:40.539400   15168 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:40.539458   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:40.612589   15168 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601043243-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:40.612763   15168 kubeconfig.go:127] "newest-cni-20220601043243-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:33:40.613129   15168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:40.614481   15168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:33:40.622686   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:40.622778   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:40.631362   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:40.833513   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:40.833669   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:40.844419   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.032599   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.032778   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.043272   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.233485   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.233633   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.245520   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.432039   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.432199   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.442259   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.632036   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.632182   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.643253   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.832061   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.832163   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.842628   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.032064   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.032280   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.043104   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.232141   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.232275   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.242877   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.431444   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.431558   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.442433   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.633665   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.633798   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.644236   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.833678   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.833769   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.844387   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.032052   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.032199   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.042815   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.231989   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.232051   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.241645   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.433549   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.433737   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.444478   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.633562   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.633776   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.644352   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.644361   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.644405   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.652306   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.652317   15168 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
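
The repeated "Checking apiserver status ..." entries above are minikube polling pgrep over SSH roughly every 200ms until it concludes the apiserver never came up and a reconfigure is needed. A minimal Go sketch of that poll-until-deadline pattern (hypothetical, not the actual api_server.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline
    // passes, mirroring the repeated status checks in the log above.
    func waitForProcess(pattern string, interval, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
            if err == nil {
                return string(out), nil // pgrep exits 0 once a match exists
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for %q", pattern)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        pid, err := waitForProcess("kube-apiserver.*minikube.*", 200*time.Millisecond, 3*time.Second)
        fmt.Println(pid, err)
    }
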
	I0601 04:33:43.652325   15168 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:33:43.652377   15168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:33:43.683927   15168 docker.go:442] Stopping containers: [57d0d227b400 5b9b2242ae33 4c156d0546e3 5112b8a0b836 c3bb43e0ee6a bd5a43523de9 dbd0a440cdba 5263511ddfa5 e696f119d3b9 167c5c91c499 5eefec557a4a f583206f062e 5c2a8150bc25 6c3cf6adcbfe a101a3806651 23cd8f73e35d 1a71bae23aeb]
	I0601 04:33:43.684003   15168 ssh_runner.go:195] Run: docker stop 57d0d227b400 5b9b2242ae33 4c156d0546e3 5112b8a0b836 c3bb43e0ee6a bd5a43523de9 dbd0a440cdba 5263511ddfa5 e696f119d3b9 167c5c91c499 5eefec557a4a f583206f062e 5c2a8150bc25 6c3cf6adcbfe a101a3806651 23cd8f73e35d 1a71bae23aeb
	I0601 04:33:43.715513   15168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:33:43.725843   15168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:33:43.733397   15168 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:32 /etc/kubernetes/scheduler.conf
	
	I0601 04:33:43.733445   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:33:43.740565   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:33:43.747822   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:33:43.754907   15168 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.754970   15168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:33:43.762172   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:33:43.769712   15168 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.769754   15168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:33:43.776624   15168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:33:43.784322   15168 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:33:43.784331   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:43.833113   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.699448   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.829515   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.877837   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
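
The five Run lines above replay individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of doing a full kubeadm init. A minimal Go sketch of driving that phase sequence (a hypothetical wrapper; the real commands run via sudo over SSH with the versioned binaries on PATH, as shown above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Phase order copied from the log above; each runs against the same config.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        for _, p := range phases {
            args := append(append([]string{"init", "phase"}, p...), "--config", cfg)
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }
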
	I0601 04:33:44.927753   15168 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:33:44.927821   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:45.437108   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:45.937054   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:46.437405   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:46.487754   15168 api_server.go:71] duration metric: took 1.559980599s to wait for apiserver process to appear ...
	I0601 04:33:46.487776   15168 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:33:46.487791   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:49.450097   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:33:49.450114   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:33:49.951698   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:49.957645   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:33:49.957661   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:33:50.450344   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:50.455504   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:33:50.455516   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:33:50.950347   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:50.956284   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 200:
	ok
	I0601 04:33:50.963370   15168 api_server.go:140] control plane version: v1.23.6
	I0601 04:33:50.963383   15168 api_server.go:130] duration metric: took 4.475543754s to wait for apiserver health ...
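
The healthz probes above go 403 (anonymous request rejected), then 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then 200 once the apiserver is fully up. A small Go sketch of polling such an endpoint on the forwarded port 127.0.0.1:55529 (illustrative only; minikube's real health check lives in api_server.go and may present client credentials):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz hits /healthz until it answers 200 or the deadline passes.
    // InsecureSkipVerify is tolerable only because the target is a local,
    // self-signed test apiserver, as in the log above.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("apiserver never became healthy")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(pollHealthz("https://127.0.0.1:55529/healthz", 30*time.Second))
    }
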
	I0601 04:33:50.963389   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:50.963393   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:50.963402   15168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:33:50.971086   15168 system_pods.go:59] 9 kube-system pods found
	I0601 04:33:50.971106   15168 system_pods.go:61] "coredns-64897985d-blq67" [ded91fd2-d2c9-4420-9f11-7eab7d7a70cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:50.971112   15168 system_pods.go:61] "coredns-64897985d-svsmk" [d6d0a06b-bb5a-461b-99d5-7b2fd6320947] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:50.971120   15168 system_pods.go:61] "etcd-newest-cni-20220601043243-2342" [5d33aabb-0215-438c-ad10-61ba084cc15f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:33:50.971129   15168 system_pods.go:61] "kube-apiserver-newest-cni-20220601043243-2342" [8c56d510-5f64-431d-8954-8c3cf47404a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 04:33:50.971135   15168 system_pods.go:61] "kube-controller-manager-newest-cni-20220601043243-2342" [0b2a261d-dcd5-4705-b2fb-db51ba34d827] Running
	I0601 04:33:50.971138   15168 system_pods.go:61] "kube-proxy-br6ph" [788e299a-04d3-43a8-bf6b-c0e52acbcd4a] Running
	I0601 04:33:50.971142   15168 system_pods.go:61] "kube-scheduler-newest-cni-20220601043243-2342" [16e28b92-b394-42e6-bed5-ea1917414ae2] Running
	I0601 04:33:50.971146   15168 system_pods.go:61] "metrics-server-b955d9d8-9qrh2" [37627389-19ca-44a3-b5a8-a0aff226824d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:33:50.971153   15168 system_pods.go:61] "storage-provisioner" [ab053075-62ac-43ac-b212-ba5bfef0faef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:33:50.971157   15168 system_pods.go:74] duration metric: took 7.751202ms to wait for pod list to return data ...
	I0601 04:33:50.971164   15168 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:33:50.975532   15168 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:33:50.975549   15168 node_conditions.go:123] node cpu capacity is 6
	I0601 04:33:50.975561   15168 node_conditions.go:105] duration metric: took 4.393013ms to run NodePressure ...
	I0601 04:33:50.975577   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:51.213450   15168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:33:51.224417   15168 ops.go:34] apiserver oom_adj: -16
	I0601 04:33:51.224430   15168 kubeadm.go:630] restartCluster took 10.691695543s
	I0601 04:33:51.224437   15168 kubeadm.go:397] StartCluster complete in 10.729042681s
	I0601 04:33:51.224455   15168 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:51.224559   15168 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:51.225197   15168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:51.229821   15168 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601043243-2342" rescaled to 1
	I0601 04:33:51.229862   15168 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:33:51.229891   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:33:51.251907   15168 out.go:177] * Verifying Kubernetes components...
	I0601 04:33:51.229903   15168 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:33:51.230072   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:51.294534   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:33:51.294549   15168 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294558   15168 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294565   15168 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294576   15168 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601043243-2342"
	I0601 04:33:51.294582   15168 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601043243-2342"
	W0601 04:33:51.294589   15168 addons.go:165] addon metrics-server should already be in state true
	I0601 04:33:51.294592   15168 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601043243-2342"
	W0601 04:33:51.294604   15168 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:33:51.294553   15168 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601043243-2342"
	W0601 04:33:51.294616   15168 addons.go:165] addon dashboard should already be in state true
	I0601 04:33:51.294629   15168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601043243-2342"
	I0601 04:33:51.294668   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.294672   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.294679   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.295056   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295147   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295159   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295281   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.425427   15168 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601043243-2342"
	I0601 04:33:51.476401   15168 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	W0601 04:33:51.476435   15168 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:33:51.455598   15168 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:33:51.534679   15168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:33:51.645866   15168 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:33:51.513660   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.571610   15168 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:33:51.608663   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:33:51.667451   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:33:51.667499   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:33:51.667521   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:33:51.667530   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:33:51.667553   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.667572   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.667577   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.671586   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.681072   15168 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 04:33:51.681150   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.875389   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.876872   15168 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:33:51.876905   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:33:51.877044   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.877013   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.877104   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.879062   15168 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:33:51.879531   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:51.896460   15168 api_server.go:71] duration metric: took 666.56763ms to wait for apiserver process to appear ...
	I0601 04:33:51.896493   15168 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:33:51.896513   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:51.905685   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 200:
	ok
	I0601 04:33:51.907893   15168 api_server.go:140] control plane version: v1.23.6
	I0601 04:33:51.907906   15168 api_server.go:130] duration metric: took 11.403704ms to wait for apiserver health ...
	I0601 04:33:51.907913   15168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:33:51.917780   15168 system_pods.go:59] 9 kube-system pods found
	I0601 04:33:51.917804   15168 system_pods.go:61] "coredns-64897985d-blq67" [ded91fd2-d2c9-4420-9f11-7eab7d7a70cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:51.917831   15168 system_pods.go:61] "coredns-64897985d-svsmk" [d6d0a06b-bb5a-461b-99d5-7b2fd6320947] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:51.917861   15168 system_pods.go:61] "etcd-newest-cni-20220601043243-2342" [5d33aabb-0215-438c-ad10-61ba084cc15f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:33:51.917884   15168 system_pods.go:61] "kube-apiserver-newest-cni-20220601043243-2342" [8c56d510-5f64-431d-8954-8c3cf47404a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 04:33:51.917903   15168 system_pods.go:61] "kube-controller-manager-newest-cni-20220601043243-2342" [0b2a261d-dcd5-4705-b2fb-db51ba34d827] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:33:51.917910   15168 system_pods.go:61] "kube-proxy-br6ph" [788e299a-04d3-43a8-bf6b-c0e52acbcd4a] Running
	I0601 04:33:51.917947   15168 system_pods.go:61] "kube-scheduler-newest-cni-20220601043243-2342" [16e28b92-b394-42e6-bed5-ea1917414ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 04:33:51.917958   15168 system_pods.go:61] "metrics-server-b955d9d8-9qrh2" [37627389-19ca-44a3-b5a8-a0aff226824d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:33:51.917967   15168 system_pods.go:61] "storage-provisioner" [ab053075-62ac-43ac-b212-ba5bfef0faef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:33:51.917973   15168 system_pods.go:74] duration metric: took 10.05572ms to wait for pod list to return data ...
	I0601 04:33:51.917981   15168 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:33:51.922534   15168 default_sa.go:45] found service account: "default"
	I0601 04:33:51.922552   15168 default_sa.go:55] duration metric: took 4.563505ms for default service account to be created ...
	I0601 04:33:51.922563   15168 kubeadm.go:572] duration metric: took 692.673499ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 04:33:51.922586   15168 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:33:51.926230   15168 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:33:51.926242   15168 node_conditions.go:123] node cpu capacity is 6
	I0601 04:33:51.926250   15168 node_conditions.go:105] duration metric: took 3.660356ms to run NodePressure ...
	I0601 04:33:51.926258   15168 start.go:213] waiting for startup goroutines ...
	I0601 04:33:51.969132   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.996898   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:33:51.996917   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:33:52.004032   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:33:52.010449   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:33:52.010463   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:33:52.021734   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:33:52.021755   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:33:52.080927   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:33:52.080944   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:33:52.096221   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:33:52.096238   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:33:52.103028   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:33:52.108213   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:33:52.108227   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:33:52.120168   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:33:52.187777   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:33:52.187789   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:33:52.218467   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:33:52.218483   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:33:52.297544   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:33:52.297560   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:33:52.387473   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:33:52.387493   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:33:52.414422   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:33:52.414440   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:33:52.436622   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:33:52.436636   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:33:52.497411   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:33:53.406044   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.401973256s)
	I0601 04:33:53.406130   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.303070551s)
	I0601 04:33:53.423155   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302944456s)
	I0601 04:33:53.423183   15168 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601043243-2342"
	I0601 04:33:53.599892   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.102440107s)
	I0601 04:33:53.662455   15168 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 04:33:53.699587   15168 addons.go:417] enableAddons completed in 2.469643476s
	I0601 04:33:53.740112   15168 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:33:53.763371   15168 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601043243-2342" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:14:29 UTC, end at Wed 2022-06-01 11:41:28 UTC. --
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.661521825Z" level=info msg="Starting up"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663342504Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663395200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663411000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.663419036Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664701040Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664730081Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664742618Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.664754909Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.669344312Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.673789964Z" level=info msg="Loading containers: start."
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.759102419Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.791878604Z" level=info msg="Loading containers: done."
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.800298543Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.800366770Z" level=info msg="Daemon has completed initialization"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 systemd[1]: Started Docker Application Container Engine.
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.826706081Z" level=info msg="API listen on [::]:2376"
	Jun 01 11:14:29 old-k8s-version-20220601040844-2342 dockerd[130]: time="2022-06-01T11:14:29.829430983Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-01T11:41:30Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  11:41:30 up  1:22,  0 users,  load average: 0.28, 0.68, 0.80
	Linux old-k8s-version-20220601040844-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:14:29 UTC, end at Wed 2022-06-01 11:41:31 UTC. --
	Jun 01 11:41:29 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34173]: I0601 11:41:30.172864   34173 server.go:410] Version: v1.16.0
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34173]: I0601 11:41:30.173066   34173 plugins.go:100] No cloud provider specified.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34173]: I0601 11:41:30.173078   34173 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34173]: I0601 11:41:30.174864   34173 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34173]: W0601 11:41:30.175657   34173 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34173]: W0601 11:41:30.175749   34173 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34173]: F0601 11:41:30.175805   34173 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34210]: I0601 11:41:30.922389   34210 server.go:410] Version: v1.16.0
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34210]: I0601 11:41:30.922776   34210 plugins.go:100] No cloud provider specified.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34210]: I0601 11:41:30.922839   34210 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34210]: I0601 11:41:30.924660   34210 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34210]: W0601 11:41:30.925302   34210 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34210]: W0601 11:41:30.925443   34210 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 kubelet[34210]: F0601 11:41:30.925523   34210 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 11:41:30 old-k8s-version-20220601040844-2342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 04:41:30.739678   15573 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 2 (443.862728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220601040844-2342" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (50.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220601043243-2342 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
E0601 04:34:00.636775    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342: exit status 2 (16.114934218s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342: exit status 2 (16.107960287s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220601043243-2342 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601043243-2342
helpers_test.go:235: (dbg) docker inspect newest-cni-20220601043243-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e",
	        "Created": "2022-06-01T11:32:49.891079452Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:33:36.97014231Z",
	            "FinishedAt": "2022-06-01T11:33:35.066311713Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/hosts",
	        "LogPath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e-json.log",
	        "Name": "/newest-cni-20220601043243-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220601043243-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220601043243-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220601043243-2342",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220601043243-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220601043243-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220601043243-2342",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220601043243-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fc47614dc08aaf004218d5d4b46fde91f97191d53790fdce2ed0a1daaacecf0c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55526"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55528"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55529"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fc47614dc08a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220601043243-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "39500d2a5407",
	                        "newest-cni-20220601043243-2342"
	                    ],
	                    "NetworkID": "541a970f52a93c1bca2cd182e71499f265458bdd4da587edb656b740fd0f196d",
	                    "EndpointID": "3cec8b3a5976683c2d928b2c6302bd336080ba19858a2e21c5a6400a301cd916",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220601043243-2342 logs -n 25
E0601 04:34:31.675171    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220601043243-2342 logs -n 25: (4.220969754s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | no-preload-20220601041659-2342                             | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220601041659-2342                             | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                             |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342                        | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601042455-2342             | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601042455-2342             | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601043243-2342 --memory=2200            | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:33 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601043243-2342 --memory=2200            | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:33:35
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:33:35.720618   15168 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:33:35.720870   15168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:33:35.720880   15168 out.go:309] Setting ErrFile to fd 2...
	I0601 04:33:35.720890   15168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:33:35.721020   15168 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:33:35.721322   15168 out.go:303] Setting JSON to false
	I0601 04:33:35.737205   15168 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5585,"bootTime":1654077630,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:33:35.737334   15168 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:33:35.759559   15168 out.go:177] * [newest-cni-20220601043243-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:33:35.781521   15168 notify.go:193] Checking for updates...
	I0601 04:33:35.803476   15168 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:33:35.825398   15168 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:35.847711   15168 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:33:35.876102   15168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:33:35.896119   15168 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:33:35.917997   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:35.918629   15168 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:33:35.991115   15168 docker.go:137] docker version: linux-20.10.14
	I0601 04:33:35.991240   15168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:33:36.117563   15168 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:33:36.062705506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
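The two `docker system info --format "{{json .}}"` runs above are how the driver probes the Docker Desktop daemon before deciding it is usable. A minimal sketch of that probe, decoding only a few of the fields visible in the log (the struct and field selection here are illustrative, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo keeps just the fields this sketch cares about; the real
// output (see the log line above) carries many more.
type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
}

func main() {
	// Same command the log shows: ask the daemon for its info as JSON.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode docker info: %v", err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}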
	I0601 04:33:36.139492   15168 out.go:177] * Using the docker driver based on existing profile
	I0601 04:33:36.161209   15168 start.go:284] selected driver: docker
	I0601 04:33:36.161237   15168 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[a
piserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:36.161459   15168 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:33:36.164798   15168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:33:36.292407   15168 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:33:36.238121209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:33:36.292574   15168 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 04:33:36.292590   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:36.292599   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:36.292612   15168 start_flags.go:306] config:
	{Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_
ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:36.314637   15168 out.go:177] * Starting control plane node newest-cni-20220601043243-2342 in cluster newest-cni-20220601043243-2342
	I0601 04:33:36.336250   15168 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:33:36.358074   15168 out.go:177] * Pulling base image ...
	I0601 04:33:36.400252   15168 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:33:36.400261   15168 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:33:36.400324   15168 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:33:36.400340   15168 cache.go:57] Caching tarball of preloaded images
	I0601 04:33:36.400507   15168 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:33:36.400537   15168 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
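The preload.go lines above only confirm that the v1.23.6 preload tarball is already on disk before any download is attempted; it is the same kind of existence check that fails for v1.16.0 in the preload-exists test earlier in this report. A rough sketch of that check, with the cache path spelled out only for illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Illustrative path layout; the real path is derived from the minikube
	// home directory, the preload schema version, the Kubernetes version
	// and the container runtime.
	tarball := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache",
		"preloaded-tarball", "preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4")

	if _, err := os.Stat(tarball); err != nil {
		// A missing file here is what surfaces as "failed to verify
		// preloaded tarball file exists" in the preload-exists tests.
		fmt.Printf("preload not cached (%v), would download it\n", err)
		return
	}
	fmt.Println("found local preload, skipping download:", tarball)
}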
	I0601 04:33:36.401383   15168 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/config.json ...
	I0601 04:33:36.468470   15168 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:33:36.468517   15168 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:33:36.468530   15168 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:33:36.468582   15168 start.go:352] acquiring machines lock for newest-cni-20220601043243-2342: {Name:mk1c220030e5dc7346d70b8e86adc86c2159451d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:33:36.468682   15168 start.go:356] acquired machines lock for "newest-cni-20220601043243-2342" in 63.546µs
	I0601 04:33:36.468703   15168 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:33:36.468712   15168 fix.go:55] fixHost starting: 
	I0601 04:33:36.468936   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:36.538969   15168 fix.go:103] recreateIfNeeded on newest-cni-20220601043243-2342: state=Stopped err=<nil>
	W0601 04:33:36.539000   15168 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:33:36.560938   15168 out.go:177] * Restarting existing docker container for "newest-cni-20220601043243-2342" ...
	I0601 04:33:36.582691   15168 cli_runner.go:164] Run: docker start newest-cni-20220601043243-2342
	I0601 04:33:36.965186   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:37.040816   15168 kic.go:416] container "newest-cni-20220601043243-2342" state is running.
	I0601 04:33:37.041410   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:37.123521   15168 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/config.json ...
	I0601 04:33:37.124035   15168 machine.go:88] provisioning docker machine ...
	I0601 04:33:37.124080   15168 ubuntu.go:169] provisioning hostname "newest-cni-20220601043243-2342"
	I0601 04:33:37.124139   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.206841   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:37.207075   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:37.207093   15168 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601043243-2342 && echo "newest-cni-20220601043243-2342" | sudo tee /etc/hostname
	I0601 04:33:37.334284   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601043243-2342
	
	I0601 04:33:37.334371   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.411022   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:37.411207   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:37.411228   15168 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601043243-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601043243-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601043243-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:33:37.529094   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: 
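Each `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call above resolves the host port Docker mapped to the container's SSH port (55530 in this run), which is then dialled on 127.0.0.1. A small sketch of the same lookup; the inspect template and quote-stripping match the log, the helper itself is ours:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is published for the
// container's 22/tcp, mirroring the inspect template in the log.
func sshHostPort(container string) (string, error) {
	tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	// The template wraps the port in single quotes; strip them and the newline.
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := sshHostPort("newest-cni-20220601043243-2342")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh to 127.0.0.1:" + port)
}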
	I0601 04:33:37.529112   15168 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/se
rver.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:33:37.529177   15168 ubuntu.go:177] setting up certificates
	I0601 04:33:37.529187   15168 provision.go:83] configureAuth start
	I0601 04:33:37.529248   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:37.607518   15168 provision.go:138] copyHostCerts
	I0601 04:33:37.607606   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:33:37.607615   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:33:37.607703   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:33:37.607912   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:33:37.607923   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:33:37.607982   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:33:37.608149   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:33:37.608155   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:33:37.608215   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:33:37.608334   15168 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601043243-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601043243-2342]
	I0601 04:33:37.777252   15168 provision.go:172] copyRemoteCerts
	I0601 04:33:37.777323   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:33:37.777383   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.854290   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:37.942665   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:33:37.962614   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0601 04:33:37.983784   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:33:38.002393   15168 provision.go:86] duration metric: configureAuth took 473.185921ms
	I0601 04:33:38.002406   15168 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:33:38.002590   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:38.002643   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.076962   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.077109   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.077122   15168 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:33:38.199559   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:33:38.199574   15168 ubuntu.go:71] root file system type: overlay
	I0601 04:33:38.199704   15168 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:33:38.199770   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.271586   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.271747   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.271799   15168 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:33:38.400393   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:33:38.400485   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.473348   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.473500   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.473515   15168 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:33:38.599343   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:33:38.599357   15168 machine.go:91] provisioned docker machine in 1.475295788s
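The `sudo diff -u ... || { mv ...; systemctl ... }` command a few lines above is deliberately idempotent: it compares the freshly rendered docker.service against the installed one and only swaps the file in and restarts Docker when they differ, which is why this run finishes with empty output. The same compare-then-replace step, sketched locally on two ordinary files (the paths and the restart hook are placeholders, not the real systemd plumbing):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

// syncUnit replaces dst with src only when their contents differ,
// returning true when a daemon-reload/restart would be needed.
func syncUnit(dst, src string) (bool, error) {
	want, err := os.ReadFile(src)
	if err != nil {
		return false, err
	}
	have, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // already up to date, as in this run
	}
	if err := os.Rename(src, dst); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := syncUnit("docker.service", "docker.service.new")
	if err != nil {
		log.Fatal(err)
	}
	if changed {
		fmt.Println("unit updated: would run systemctl daemon-reload && systemctl restart docker")
	} else {
		fmt.Println("unit unchanged: no restart needed")
	}
}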
	I0601 04:33:38.599371   15168 start.go:306] post-start starting for "newest-cni-20220601043243-2342" (driver="docker")
	I0601 04:33:38.599376   15168 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:33:38.599429   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:33:38.599473   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.672100   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:38.756558   15168 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:33:38.760333   15168 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:33:38.760347   15168 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:33:38.760354   15168 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:33:38.760358   15168 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:33:38.760366   15168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:33:38.760478   15168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:33:38.760619   15168 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:33:38.760771   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:33:38.767743   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:33:38.784935   15168 start.go:309] post-start completed in 185.552686ms
	I0601 04:33:38.785046   15168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:33:38.785139   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.856795   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:38.942547   15168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:33:38.946912   15168 fix.go:57] fixHost completed within 2.478168311s
	I0601 04:33:38.946922   15168 start.go:81] releasing machines lock for "newest-cni-20220601043243-2342", held for 2.478201289s
	I0601 04:33:38.946988   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:39.020056   15168 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:33:39.020057   15168 ssh_runner.go:195] Run: systemctl --version
	I0601 04:33:39.020128   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.020136   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.099694   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:39.102563   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:39.184807   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:33:39.318352   15168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:33:39.327971   15168 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:33:39.328023   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:33:39.337289   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:33:39.350192   15168 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:33:39.421583   15168 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:33:39.493616   15168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:33:39.505293   15168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:33:39.575468   15168 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:33:39.585698   15168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:33:39.621072   15168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:33:39.702871   15168 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:33:39.702997   15168 cli_runner.go:164] Run: docker exec -t newest-cni-20220601043243-2342 dig +short host.docker.internal
	I0601 04:33:39.840169   15168 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:33:39.840253   15168 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:33:39.844483   15168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
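The bash one-liner above rewrites /etc/hosts so exactly one host.minikube.internal entry remains: it drops any line already carrying that name, appends the fresh mapping, and copies the result back into place. A sketch of the same rewrite against a throw-away copy of the file (the IP and hostname come from the log; the helper and scratch path are ours):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// upsertHost removes any existing line for name and appends "ip\tname",
// following the grep -v / echo pipeline shown in the log.
func upsertHost(hostsPath, ip, name string) (string, error) {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n", nil
}

func main() {
	// Operate on a scratch copy rather than the real /etc/hosts.
	tmp := "hosts.copy"
	if err := os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644); err != nil {
		log.Fatal(err)
	}
	out, err := upsertHost(tmp, "192.168.65.2", "host.minikube.internal")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}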
	I0601 04:33:39.855140   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.950021   15168 out.go:177]   - kubelet.network-plugin=cni
	I0601 04:33:39.971828   15168 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 04:33:39.993668   15168 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:33:39.993799   15168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:33:40.025393   15168 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:33:40.025408   15168 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:33:40.025484   15168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:33:40.055373   15168 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:33:40.055395   15168 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:33:40.055472   15168 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:33:40.131178   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:40.131190   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:40.131210   15168 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 04:33:40.131223   15168 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601043243-2342 NodeName:newest-cni-20220601043243-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false]
Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:33:40.131366   15168 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220601043243-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:33:40.131481   15168 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601043243-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:33:40.131563   15168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:33:40.139314   15168 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:33:40.139369   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:33:40.146700   15168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0601 04:33:40.159242   15168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:33:40.173044   15168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
	I0601 04:33:40.187746   15168 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:33:40.192011   15168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:33:40.202490   15168 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342 for IP: 192.168.49.2
	I0601 04:33:40.202649   15168 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:33:40.202730   15168 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:33:40.202816   15168 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/client.key
	I0601 04:33:40.202874   15168 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.key.dd3b5fb2
	I0601 04:33:40.202934   15168 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.key
	I0601 04:33:40.203229   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:33:40.203287   15168 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:33:40.203317   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:33:40.203414   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:33:40.203445   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:33:40.203509   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:33:40.203625   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:33:40.204248   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:33:40.222705   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 04:33:40.240287   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:33:40.261621   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:33:40.279108   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:33:40.296730   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:33:40.313701   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:33:40.331521   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:33:40.349032   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:33:40.366996   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:33:40.384175   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:33:40.402174   15168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:33:40.415032   15168 ssh_runner.go:195] Run: openssl version
	I0601 04:33:40.420329   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:33:40.428113   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.432369   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.432419   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.437889   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:33:40.445633   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:33:40.453598   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.457515   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.457560   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.462878   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:33:40.470241   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:33:40.478182   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.482194   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.482243   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.487883   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
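The openssl/ln pairs above install each certificate into the system trust directory under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0). A sketch of one such step, shelling out to the same `openssl x509 -hash -noout` invocation; the destination directory is a placeholder so the sketch does not touch /etc/ssl:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert symlinks certPath into dir under "<subject-hash>.0",
// the naming scheme OpenSSL's certificate directory lookup expects.
func installCert(dir, certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already installed, like the `test -L ||` guard above
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := installCert(".", "minikubeCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("trusted via", link)
}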
	I0601 04:33:40.495262   15168 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_r
unning:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:40.495361   15168 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:33:40.525182   15168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:33:40.532579   15168 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:33:40.532592   15168 kubeadm.go:626] restartCluster start
	I0601 04:33:40.532637   15168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:33:40.539400   15168 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:40.539458   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:40.612589   15168 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601043243-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:40.612763   15168 kubeconfig.go:127] "newest-cni-20220601043243-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:33:40.613129   15168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:40.614481   15168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:33:40.622686   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:40.622778   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:40.631362   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
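The "Checking apiserver status" entries that begin here and repeat below are a polling loop: roughly every 200ms the restart path asks the node, via `sudo pgrep -xnf kube-apiserver.*minikube.*`, whether an apiserver process exists yet, and each warning simply records that pgrep exited 1 because the pod has not started. A rough local sketch of that retry shape (the probe command matches the log; the interval, deadline and helper are assumptions, not minikube's source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls probe until it succeeds or the deadline passes,
// mirroring the ~200ms cadence of the log entries around this point.
func waitForAPIServer(probe func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("apiserver never came up")
		}
		time.Sleep(interval)
	}
}

func main() {
	probe := func() error {
		// Exits non-zero (status 1) while no kube-apiserver process exists,
		// which is exactly what the repeated warnings record.
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	}
	if err := waitForAPIServer(probe, 200*time.Millisecond, 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("apiserver process found")
	}
}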
	I0601 04:33:40.833513   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:40.833669   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:40.844419   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.032599   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.032778   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.043272   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.233485   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.233633   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.245520   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.432039   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.432199   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.442259   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.632036   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.632182   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.643253   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.832061   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.832163   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.842628   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.032064   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.032280   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.043104   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.232141   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.232275   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.242877   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.431444   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.431558   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.442433   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.633665   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.633798   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.644236   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.833678   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.833769   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.844387   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.032052   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.032199   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.042815   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.231989   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.232051   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.241645   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.433549   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.433737   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.444478   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.633562   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.633776   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.644352   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.644361   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.644405   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.652306   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.652317   15168 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:33:43.652325   15168 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:33:43.652377   15168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:33:43.683927   15168 docker.go:442] Stopping containers: [57d0d227b400 5b9b2242ae33 4c156d0546e3 5112b8a0b836 c3bb43e0ee6a bd5a43523de9 dbd0a440cdba 5263511ddfa5 e696f119d3b9 167c5c91c499 5eefec557a4a f583206f062e 5c2a8150bc25 6c3cf6adcbfe a101a3806651 23cd8f73e35d 1a71bae23aeb]
	I0601 04:33:43.684003   15168 ssh_runner.go:195] Run: docker stop 57d0d227b400 5b9b2242ae33 4c156d0546e3 5112b8a0b836 c3bb43e0ee6a bd5a43523de9 dbd0a440cdba 5263511ddfa5 e696f119d3b9 167c5c91c499 5eefec557a4a f583206f062e 5c2a8150bc25 6c3cf6adcbfe a101a3806651 23cd8f73e35d 1a71bae23aeb
	I0601 04:33:43.715513   15168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:33:43.725843   15168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:33:43.733397   15168 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:32 /etc/kubernetes/scheduler.conf
	
	I0601 04:33:43.733445   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:33:43.740565   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:33:43.747822   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:33:43.754907   15168 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.754970   15168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:33:43.762172   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:33:43.769712   15168 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.769754   15168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:33:43.776624   15168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:33:43.784322   15168 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:33:43.784331   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:43.833113   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.699448   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.829515   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.877837   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.927753   15168 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:33:44.927821   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:45.437108   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:45.937054   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:46.437405   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:46.487754   15168 api_server.go:71] duration metric: took 1.559980599s to wait for apiserver process to appear ...
	I0601 04:33:46.487776   15168 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:33:46.487791   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:49.450097   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:33:49.450114   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:33:49.951698   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:49.957645   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:33:49.957661   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:33:50.450344   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:50.455504   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:33:50.455516   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:33:50.950347   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:50.956284   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 200:
	ok
	I0601 04:33:50.963370   15168 api_server.go:140] control plane version: v1.23.6
	I0601 04:33:50.963383   15168 api_server.go:130] duration metric: took 4.475543754s to wait for apiserver health ...
	I0601 04:33:50.963389   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:50.963393   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:50.963402   15168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:33:50.971086   15168 system_pods.go:59] 9 kube-system pods found
	I0601 04:33:50.971106   15168 system_pods.go:61] "coredns-64897985d-blq67" [ded91fd2-d2c9-4420-9f11-7eab7d7a70cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:50.971112   15168 system_pods.go:61] "coredns-64897985d-svsmk" [d6d0a06b-bb5a-461b-99d5-7b2fd6320947] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:50.971120   15168 system_pods.go:61] "etcd-newest-cni-20220601043243-2342" [5d33aabb-0215-438c-ad10-61ba084cc15f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:33:50.971129   15168 system_pods.go:61] "kube-apiserver-newest-cni-20220601043243-2342" [8c56d510-5f64-431d-8954-8c3cf47404a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 04:33:50.971135   15168 system_pods.go:61] "kube-controller-manager-newest-cni-20220601043243-2342" [0b2a261d-dcd5-4705-b2fb-db51ba34d827] Running
	I0601 04:33:50.971138   15168 system_pods.go:61] "kube-proxy-br6ph" [788e299a-04d3-43a8-bf6b-c0e52acbcd4a] Running
	I0601 04:33:50.971142   15168 system_pods.go:61] "kube-scheduler-newest-cni-20220601043243-2342" [16e28b92-b394-42e6-bed5-ea1917414ae2] Running
	I0601 04:33:50.971146   15168 system_pods.go:61] "metrics-server-b955d9d8-9qrh2" [37627389-19ca-44a3-b5a8-a0aff226824d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:33:50.971153   15168 system_pods.go:61] "storage-provisioner" [ab053075-62ac-43ac-b212-ba5bfef0faef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:33:50.971157   15168 system_pods.go:74] duration metric: took 7.751202ms to wait for pod list to return data ...
	I0601 04:33:50.971164   15168 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:33:50.975532   15168 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:33:50.975549   15168 node_conditions.go:123] node cpu capacity is 6
	I0601 04:33:50.975561   15168 node_conditions.go:105] duration metric: took 4.393013ms to run NodePressure ...
	I0601 04:33:50.975577   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:51.213450   15168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:33:51.224417   15168 ops.go:34] apiserver oom_adj: -16
	I0601 04:33:51.224430   15168 kubeadm.go:630] restartCluster took 10.691695543s
	I0601 04:33:51.224437   15168 kubeadm.go:397] StartCluster complete in 10.729042681s
	I0601 04:33:51.224455   15168 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:51.224559   15168 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:51.225197   15168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:51.229821   15168 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601043243-2342" rescaled to 1
	I0601 04:33:51.229862   15168 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:33:51.229891   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:33:51.251907   15168 out.go:177] * Verifying Kubernetes components...
	I0601 04:33:51.229903   15168 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:33:51.230072   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:51.294534   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:33:51.294549   15168 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294558   15168 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294565   15168 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294576   15168 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601043243-2342"
	I0601 04:33:51.294582   15168 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601043243-2342"
	W0601 04:33:51.294589   15168 addons.go:165] addon metrics-server should already be in state true
	I0601 04:33:51.294592   15168 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601043243-2342"
	W0601 04:33:51.294604   15168 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:33:51.294553   15168 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601043243-2342"
	W0601 04:33:51.294616   15168 addons.go:165] addon dashboard should already be in state true
	I0601 04:33:51.294629   15168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601043243-2342"
	I0601 04:33:51.294668   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.294672   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.294679   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.295056   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295147   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295159   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295281   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.425427   15168 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601043243-2342"
	I0601 04:33:51.476401   15168 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	W0601 04:33:51.476435   15168 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:33:51.455598   15168 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:33:51.534679   15168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:33:51.645866   15168 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:33:51.513660   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.571610   15168 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:33:51.608663   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:33:51.667451   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:33:51.667499   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:33:51.667521   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:33:51.667530   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:33:51.667553   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.667572   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.667577   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.671586   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.681072   15168 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 04:33:51.681150   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.875389   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.876872   15168 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:33:51.876905   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:33:51.877044   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.877013   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.877104   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.879062   15168 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:33:51.879531   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:51.896460   15168 api_server.go:71] duration metric: took 666.56763ms to wait for apiserver process to appear ...
	I0601 04:33:51.896493   15168 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:33:51.896513   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:51.905685   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 200:
	ok
	I0601 04:33:51.907893   15168 api_server.go:140] control plane version: v1.23.6
	I0601 04:33:51.907906   15168 api_server.go:130] duration metric: took 11.403704ms to wait for apiserver health ...
	I0601 04:33:51.907913   15168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:33:51.917780   15168 system_pods.go:59] 9 kube-system pods found
	I0601 04:33:51.917804   15168 system_pods.go:61] "coredns-64897985d-blq67" [ded91fd2-d2c9-4420-9f11-7eab7d7a70cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:51.917831   15168 system_pods.go:61] "coredns-64897985d-svsmk" [d6d0a06b-bb5a-461b-99d5-7b2fd6320947] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:51.917861   15168 system_pods.go:61] "etcd-newest-cni-20220601043243-2342" [5d33aabb-0215-438c-ad10-61ba084cc15f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:33:51.917884   15168 system_pods.go:61] "kube-apiserver-newest-cni-20220601043243-2342" [8c56d510-5f64-431d-8954-8c3cf47404a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 04:33:51.917903   15168 system_pods.go:61] "kube-controller-manager-newest-cni-20220601043243-2342" [0b2a261d-dcd5-4705-b2fb-db51ba34d827] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:33:51.917910   15168 system_pods.go:61] "kube-proxy-br6ph" [788e299a-04d3-43a8-bf6b-c0e52acbcd4a] Running
	I0601 04:33:51.917947   15168 system_pods.go:61] "kube-scheduler-newest-cni-20220601043243-2342" [16e28b92-b394-42e6-bed5-ea1917414ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 04:33:51.917958   15168 system_pods.go:61] "metrics-server-b955d9d8-9qrh2" [37627389-19ca-44a3-b5a8-a0aff226824d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:33:51.917967   15168 system_pods.go:61] "storage-provisioner" [ab053075-62ac-43ac-b212-ba5bfef0faef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:33:51.917973   15168 system_pods.go:74] duration metric: took 10.05572ms to wait for pod list to return data ...
	I0601 04:33:51.917981   15168 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:33:51.922534   15168 default_sa.go:45] found service account: "default"
	I0601 04:33:51.922552   15168 default_sa.go:55] duration metric: took 4.563505ms for default service account to be created ...
	I0601 04:33:51.922563   15168 kubeadm.go:572] duration metric: took 692.673499ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 04:33:51.922586   15168 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:33:51.926230   15168 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:33:51.926242   15168 node_conditions.go:123] node cpu capacity is 6
	I0601 04:33:51.926250   15168 node_conditions.go:105] duration metric: took 3.660356ms to run NodePressure ...
	I0601 04:33:51.926258   15168 start.go:213] waiting for startup goroutines ...
	I0601 04:33:51.969132   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.996898   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:33:51.996917   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:33:52.004032   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:33:52.010449   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:33:52.010463   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:33:52.021734   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:33:52.021755   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:33:52.080927   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:33:52.080944   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:33:52.096221   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:33:52.096238   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:33:52.103028   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:33:52.108213   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:33:52.108227   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:33:52.120168   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:33:52.187777   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:33:52.187789   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:33:52.218467   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:33:52.218483   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:33:52.297544   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:33:52.297560   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:33:52.387473   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:33:52.387493   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:33:52.414422   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:33:52.414440   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:33:52.436622   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:33:52.436636   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:33:52.497411   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:33:53.406044   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.401973256s)
	I0601 04:33:53.406130   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.303070551s)
	I0601 04:33:53.423155   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302944456s)
	I0601 04:33:53.423183   15168 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601043243-2342"
	I0601 04:33:53.599892   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.102440107s)
	I0601 04:33:53.662455   15168 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 04:33:53.699587   15168 addons.go:417] enableAddons completed in 2.469643476s
	I0601 04:33:53.740112   15168 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:33:53.763371   15168 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601043243-2342" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:33:37 UTC, end at Wed 2022-06-01 11:34:31 UTC. --
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.467327530Z" level=info msg="Removing stale sandbox 7c5c096f42c5f70b5ee2cc94bcd455f82f5b28c58c228d5f65170b47630a3f7e (23cd8f73e35da1041fd074db486ba28bfcdfde16fc55593b1cc7b92fa352402e)"
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.469106521Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 87258a15c9015cc10b678da1e5ec241c6e2760286046982e0533ea12f21280f1 0cfe5c7feeee97daa4356000f5c14d108f61a0f65f18a43fc803cab062fcb634], retrying...."
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.561173079Z" level=info msg="Removing stale sandbox dc1311045bd7f1af3c5ebdb362fe0e0b2edc478ad7601fff2647eb5b9e9341e4 (e696f119d3b9a8a08e277cfecd368f61532e6b6ea3fcac661d58ea30940667d0)"
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.562906758Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 87258a15c9015cc10b678da1e5ec241c6e2760286046982e0533ea12f21280f1 333789a133625a17cf7a539901f73928c051d17d4ff5c987d0310bbad8353a30], retrying...."
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.662838266Z" level=info msg="Removing stale sandbox 139dc2a563b57a8338b755831726dd1e04ae64f4db13190acef96f32e4da2e12 (5112b8a0b836e92f8416c325a8882e3fb7de756d8034f10f3beaa75e53988f93)"
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.663920422Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 87258a15c9015cc10b678da1e5ec241c6e2760286046982e0533ea12f21280f1 fdf1098b81d22302ca1442c7daea3b7b859677412219dc191326331268ea35d2], retrying...."
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.762371165Z" level=info msg="Removing stale sandbox 27184ae9d7f76ff08086d96d23464486aba780a7c545fc9b1ecc08c40858aaae (6c3cf6adcbfe18da9462b383e74932eb949be1ac23375179eab48fa5923cc649)"
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.763680839Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 87258a15c9015cc10b678da1e5ec241c6e2760286046982e0533ea12f21280f1 3666cb0eac286bf7fceb1ea26b0cc5f330e0f73160972cf3851d5db6d6738b49], retrying...."
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.787449725Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.828599758Z" level=info msg="Loading containers: done."
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.837489971Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.837628216Z" level=info msg="Daemon has completed initialization"
	Jun 01 11:33:37 newest-cni-20220601043243-2342 systemd[1]: Started Docker Application Container Engine.
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.864113041Z" level=info msg="API listen on [::]:2376"
	Jun 01 11:33:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:37.868783570Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 11:33:50 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:50.905269974Z" level=info msg="ignoring event" container=1378f71e5a5cff0e0d6fd5ccedb16a11b1caa07a8d8eed3ee2a1c9acc254012c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:51 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:51.880056503Z" level=info msg="ignoring event" container=c4eb95725f2bac9c0d102cc7e49adbf9bc0b631ba73e63826e4e63e8f52106a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:52 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:52.304286509Z" level=info msg="ignoring event" container=83f10413022ebb0cebeffadbc872a86a3fb255b52aa3f01069475a368be357ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:53 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:53.131666469Z" level=info msg="ignoring event" container=385d5fbf4aa2d88c362767c14eb30b2dcf8f88dea33ddad875a8c000572ec21e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:53 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:53.204687409Z" level=info msg="ignoring event" container=67f09f21ea542ef9128efe11161e225c52c0035f538d20c23dac884dbb16e46b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.065100759Z" level=info msg="ignoring event" container=04abef08f6c55b6198c5cdf01fe458ce6553a8bd49a47869da4fc17093f64180 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.096951128Z" level=info msg="ignoring event" container=6320de88ab7ddcf369b3cca05861794f3c8a7bb27f693f290546bc4900757e42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.918483094Z" level=info msg="ignoring event" container=6f903ec59dcd7801b90689f2fe546920ee0676673c4ca3e91eeb5f4d36c81caa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.933818051Z" level=info msg="ignoring event" container=09516534a86ed4fb769a9c14c2ce08c4be1844af19292fc7db2b73b3e2efa61d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:28 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:28.583995632Z" level=info msg="ignoring event" container=1c722764e34ec47232932eff7431c538c08d84f7ff6d6000ec390b51a1541a1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	1c722764e34ec       6e38f40d628db       40 seconds ago       Exited              storage-provisioner       1                   3a2241b2b8da0
	1aefc60a1e6d6       4c03754524064       40 seconds ago       Running             kube-proxy                1                   a0f21af05d39d
	7fcc1256c3e3e       25f8c7f3da61c       46 seconds ago       Running             etcd                      1                   99956a7b8567f
	29ba40796a3f6       df7b72818ad2e       46 seconds ago       Running             kube-controller-manager   1                   f2aee1dffb5f5
	06ffd68bc7fcb       8fa62c12256df       46 seconds ago       Running             kube-apiserver            1                   3b870b79e79d6
	3d239524ff600       595f327f224a4       46 seconds ago       Running             kube-scheduler            1                   bfbbd9c597142
	dbd0a440cdbad       4c03754524064       About a minute ago   Exited              kube-proxy                0                   e696f119d3b9a
	167c5c91c4994       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   6c3cf6adcbfe1
	5eefec557a4a2       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   23cd8f73e35da
	f583206f062ef       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   1a71bae23aeb8
	5c2a8150bc256       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   a101a38066516
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220601043243-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220601043243-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=newest-cni-20220601043243-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_33_06_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:33:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220601043243-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:34:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:33:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:33:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:33:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:34:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    newest-cni-20220601043243-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                4b8a245a-d54b-4f27-a340-95d267bdc6d0
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-blq67                                   100m (1%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (2%!)(MISSING)     72s
	  kube-system                 etcd-newest-cni-20220601043243-2342                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         84s
	  kube-system                 kube-apiserver-newest-cni-20220601043243-2342             250m (4%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         84s
	  kube-system                 kube-controller-manager-newest-cni-20220601043243-2342    200m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         84s
	  kube-system                 kube-proxy-br6ph                                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         72s
	  kube-system                 kube-scheduler-newest-cni-20220601043243-2342             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         84s
	  kube-system                 metrics-server-b955d9d8-9qrh2                             100m (1%!)(MISSING)     0 (0%!)(MISSING)      200Mi (3%!)(MISSING)       0 (0%!)(MISSING)         69s
	  kube-system                 storage-provisioner                                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         70s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-x7xtc                0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-762k5                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%!)(MISSING)  0 (0%!)(MISSING)
	  memory             370Mi (6%!)(MISSING)  170Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 71s                kube-proxy  
	  Normal  Starting                 40s                kube-proxy  
	  Normal  NodeHasSufficientPID     92s (x4 over 92s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    92s (x5 over 92s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  92s (x5 over 92s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 85s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s                kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                74s                kubelet     Node newest-cni-20220601043243-2342 status is now: NodeReady
	  Normal  Starting                 46s                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x7 over 46s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  46s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet     Node newest-cni-20220601043243-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node newest-cni-20220601043243-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [7fcc1256c3e3] <==
	* {"level":"info","ts":"2022-06-01T11:33:46.218Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T11:33:46.218Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:46.218Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:46.219Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:33:46.220Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:33:46.220Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:33:46.220Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:33:46.220Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:newest-cni-20220601043243-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:48.009Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:33:48.009Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:33:51.672Z","caller":"traceutil/trace.go:171","msg":"trace[2010815828] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"187.081734ms","start":"2022-06-01T11:33:51.485Z","end":"2022-06-01T11:33:51.672Z","steps":["trace[2010815828] 'read index received'  (duration: 187.072759ms)","trace[2010815828] 'applied index is now lower than readState.Index'  (duration: 7.983µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:33:51.675Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"163.525805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:758"}
	{"level":"info","ts":"2022-06-01T11:33:51.675Z","caller":"traceutil/trace.go:171","msg":"trace[2117314095] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:536; }","duration":"163.619537ms","start":"2022-06-01T11:33:51.512Z","end":"2022-06-01T11:33:51.675Z","steps":["trace[2117314095] 'agreement among raft nodes before linearized reading'  (duration: 163.449501ms)"],"step_count":1}
	
	* 
	* ==> etcd [f583206f062e] <==
	* {"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.149Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:newest-cni-20220601043243-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:01.151Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:33:01.151Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:01.152Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:33:01.154Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:01.154Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:23.235Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-01T11:33:23.236Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220601043243-2342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/01 11:33:23 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/01 11:33:23 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-01T11:33:23.319Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-01T11:33:23.320Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:33:23.321Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:33:23.321Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220601043243-2342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  11:34:32 up  1:15,  0 users,  load average: 1.45, 0.99, 0.86
	Linux newest-cni-20220601043243-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [06ffd68bc7fc] <==
	* I0601 11:33:49.591742       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:33:49.592441       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:33:49.592644       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:33:49.593431       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:33:49.593434       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:33:50.436974       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:33:50.437028       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:33:50.442687       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0601 11:33:50.622692       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:33:50.622747       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:33:50.622753       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:33:51.107725       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:33:51.116355       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:33:51.144995       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:33:51.194893       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:33:51.200342       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:33:51.484754       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:33:53.309518       1 controller.go:611] quota admission added evaluator for: namespaces
	I0601 11:33:53.532410       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.252.105]
	I0601 11:33:53.589843       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.137.196]
	I0601 11:34:28.981315       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:34:28.989391       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:34:29.015427       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [5eefec557a4a] <==
	* W0601 11:33:24.238569       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238642       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238662       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238682       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238713       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238691       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238745       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238746       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238756       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238867       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238875       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238905       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238934       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239024       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239381       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239457       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239488       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239392       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239421       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239560       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239733       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239746       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239871       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.240105       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.241861       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [29ba40796a3f] <==
	* I0601 11:34:28.995352       1 shared_informer.go:247] Caches are synced for job 
	I0601 11:34:28.995777       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 11:34:29.001239       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 11:34:29.001279       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 11:34:29.001288       1 disruption.go:371] Sending events to api server.
	I0601 11:34:29.002787       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:34:29.006856       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:34:29.012890       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:34:29.012947       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:34:29.013169       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:34:29.013276       1 shared_informer.go:247] Caches are synced for taint 
	I0601 11:34:29.013355       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0601 11:34:29.013417       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220601043243-2342. Assuming now as a timestamp.
	I0601 11:34:29.013462       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0601 11:34:29.013638       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 11:34:29.013819       1 event.go:294] "Event occurred" object="newest-cni-20220601043243-2342" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220601043243-2342 event: Registered Node newest-cni-20220601043243-2342 in Controller"
	I0601 11:34:29.019269       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 11:34:29.021890       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 11:34:29.070668       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:34:29.078538       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-762k5"
	I0601 11:34:29.082338       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-x7xtc"
	I0601 11:34:29.092332       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:34:29.494213       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:34:29.573463       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:34:29.573480       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [5c2a8150bc25] <==
	* I0601 11:33:18.486094       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:33:18.486176       1 shared_informer.go:247] Caches are synced for TTL 
	I0601 11:33:18.486189       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 11:33:18.486198       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:33:18.488268       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:33:18.514332       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 11:33:18.514353       1 disruption.go:371] Sending events to api server.
	I0601 11:33:18.535879       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:33:18.538627       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 11:33:18.591321       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:33:18.688423       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:33:18.737799       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:33:19.105173       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:33:19.140236       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:33:19.185960       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:33:19.186005       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:33:19.391665       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-br6ph"
	I0601 11:33:19.490832       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-blq67"
	I0601 11:33:19.496846       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-svsmk"
	I0601 11:33:19.679027       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:33:19.682565       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-svsmk"
	I0601 11:33:22.512334       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 11:33:22.516902       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 11:33:22.521522       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 11:33:22.528273       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-9qrh2"
	
	* 
	* ==> kube-proxy [1aefc60a1e6d] <==
	* I0601 11:33:51.384851       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:33:51.384911       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:33:51.384934       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:33:51.427297       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:33:51.427365       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:33:51.427374       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:33:51.427387       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:33:51.428576       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:33:51.429073       1 config.go:317] "Starting service config controller"
	I0601 11:33:51.429111       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:33:51.431360       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:33:51.431391       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:33:51.431400       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:33:51.529762       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [dbd0a440cdba] <==
	* I0601 11:33:20.322155       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:33:20.322203       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:33:20.322285       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:33:20.413521       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:33:20.413543       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:33:20.413548       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:33:20.413559       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:33:20.414014       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:33:20.416197       1 config.go:317] "Starting service config controller"
	I0601 11:33:20.416276       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:33:20.416370       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:33:20.416378       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:33:20.517085       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:33:20.517123       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [167c5c91c499] <==
	* E0601 11:33:03.643574       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.643602       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:33:03.643628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:33:03.643751       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:33:03.643780       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.643830       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:33:03.643838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.644727       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:33:03.644758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.646310       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:33:03.646367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:33:04.530495       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:33:04.530517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:33:04.594745       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:33:04.594783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:33:04.624004       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:33:04.624041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:33:04.662780       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:33:04.662826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:33:04.773035       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:33:04.773075       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:33:07.840123       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0601 11:33:23.228989       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:33:23.229762       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0601 11:33:23.229981       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [3d239524ff60] <==
	* W0601 11:33:46.221296       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0601 11:33:46.949448       1 serving.go:348] Generated self-signed cert in-memory
	W0601 11:33:49.469499       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 11:33:49.469539       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 11:33:49.469546       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 11:33:49.469550       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 11:33:49.511272       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 11:33:49.513487       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 11:33:49.513557       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:33:49.513564       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 11:33:49.513601       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 11:33:49.616068       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:33:37 UTC, end at Wed 2022-06-01 11:34:34 UTC. --
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705291    3814 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to set up pod \"coredns-64897985d-blq67_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to teardown pod \"coredns-64897985d-blq67_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-52f08543e41337faa9356a6b -m comment --comment name: \"crio\" id: \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" --wait]: exit status 2: iptables v1.8.4 (legacy):
Couldn't load target `CNI-52f08543e41337faa9356a6b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705369    3814 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to set up pod \"coredns-64897985d-blq67_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to teardown pod \"coredns-64897985d-blq67_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-52f08543e41337faa9356a6b -m comment --comment name: \"crio\" id: \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-52f08543e41337faa9356a6b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-blq67"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705396    3814 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to set up pod \"coredns-64897985d-blq67_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to teardown pod \"coredns-64897985d-blq67_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-52f08543e41337faa9356a6b -m comment --comment name: \"crio\" id: \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-52f08543e41337faa9356a6b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-blq67"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705459    3814 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-blq67_kube-system(ded91fd2-d2c9-4420-9f11-7eab7d7a70cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-blq67_kube-system(ded91fd2-d2c9-4420-9f11-7eab7d7a70cf)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\\\" network for pod \\\"coredns-64897985d-blq67\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-blq67_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\\\" network for pod \\\"coredns-64897985d-blq67\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-blq67_kube-syste
m\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-52f08543e41337faa9356a6b -m comment --comment name: \\\"crio\\\" id: \\\"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-52f08543e41337faa9356a6b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-blq67" podUID=ded91fd2-d2c9-4420-9f11-7eab7d7a70cf
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705579    3814 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-e2463ed390e5a711e28b0220 -m comment --comment name: \"crio\" id: \"5fb110a9ca2e4ca8a715837
3c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e2463ed390e5a711e28b0220':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705645    3814 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-e2463ed390e5a711e28b0220 -m comment --comment name: \"crio\" id: \"5fb110a9ca2e4ca8a7158373c82e
0f0a063e6aade8c106d75ea0f24ca8d5a467\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e2463ed390e5a711e28b0220':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705677    3814 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-e2463ed390e5a711e28b0220 -m comment --comment name: \"crio\" id: \"5fb110a9ca2e4ca8a7158373c82e
0f0a063e6aade8c106d75ea0f24ca8d5a467\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e2463ed390e5a711e28b0220':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.705751    3814 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard(6c5e0a54-21da-429a-af54-5f8116aadef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard(6c5e0a54-21da-429a-af54-5f8116aadef1)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\\\" network for pod \\\"dashb
oard-metrics-scraper-56974995fc-x7xtc\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-e2463ed390e5a711e28b0220 -m comment --comment name: \\\"crio\\\" id: \\\"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e2463ed390e5a711e28b0220':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc" podUID=6c5e0a54-21da-429a-af54-5f8116aadef1
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.706537    3814 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\" network for pod \"kubernetes-dashboard-8469778f77-762k5\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\" network for pod \"kubernetes-dashboard-8469778f77-762k5\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-3a93430404c12834bb5cda5b -m comment --comment name: \"crio\" id: \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016
d79bfb1b37a575a754ef5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3a93430404c12834bb5cda5b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.706595    3814 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\" network for pod \"kubernetes-dashboard-8469778f77-762k5\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\" network for pod \"kubernetes-dashboard-8469778f77-762k5\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-3a93430404c12834bb5cda5b -m comment --comment name: \"crio\" id: \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bf
b1b37a575a754ef5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3a93430404c12834bb5cda5b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-762k5"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.706621    3814 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\" network for pod \"kubernetes-dashboard-8469778f77-762k5\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\" network for pod \"kubernetes-dashboard-8469778f77-762k5\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-3a93430404c12834bb5cda5b -m comment --comment name: \"crio\" id: \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bf
b1b37a575a754ef5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3a93430404c12834bb5cda5b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-762k5"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.706693    3814 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard(981c6e31-56cb-4cab-9115-888bd56ddc02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard(981c6e31-56cb-4cab-9115-888bd56ddc02)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\\\" network for pod \\\"kubernetes-dashboard-8469778f77-762k5\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\\\" network for pod \\\"kubernetes-dashboard-8469
778f77-762k5\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-3a93430404c12834bb5cda5b -m comment --comment name: \\\"crio\\\" id: \\\"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3a93430404c12834bb5cda5b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-762k5" podUID=981c6e31-56cb-4cab-9115-888bd56ddc02
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.748249    3814 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/metrics-server-b955d9d8-9qrh2" podSandboxID={Type:docker ID:09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9} podNetnsPath="/proc/5147/ns/net" networkType="bridge" networkName="crio"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:33.777362    3814 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.20 -j CNI-ea620b7d648e2d65ce343b24 -m comment --comment name: \"crio\" id: \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ea620b7d648e2d65ce343b24':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-b955d9d8-9qrh2" podSandboxID={Type:docker ID:09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9} podNetnsPath="/proc/5147/ns/net" networkType="bridge" networkName="crio"
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:34.161650    3814 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.20 -j CNI-ea620b7d648e2d65ce343b24 -m comment --comment name: \"crio\" id: \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" --wait]: exit status 2: ip
tables v1.8.4 (legacy): Couldn't load target `CNI-ea620b7d648e2d65ce343b24':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:34.161697    3814 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.20 -j CNI-ea620b7d648e2d65ce343b24 -m comment --comment name: \"crio\" id: \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-ea620b7d648e2d65ce343b24':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-9qrh2"
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:34.161737    3814 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.20 -j CNI-ea620b7d648e2d65ce343b24 -m comment --comment name: \"crio\" id: \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-ea620b7d648e2d65ce343b24':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-9qrh2"
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:34.161792    3814 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-9qrh2_kube-system(37627389-19ca-44a3-b5a8-a0aff226824d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-9qrh2_kube-system(37627389-19ca-44a3-b5a8-a0aff226824d)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\\\" network for pod \\\"metrics-server-b955d9d8-9qrh2\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-9qrh2_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\\\" network for pod \\\"metrics-server-b955d9d8-9qrh2\\\": networkPlugin cni failed to teardown pod \\\"metr
ics-server-b955d9d8-9qrh2_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.20 -j CNI-ea620b7d648e2d65ce343b24 -m comment --comment name: \\\"crio\\\" id: \\\"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ea620b7d648e2d65ce343b24':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-9qrh2" podUID=37627389-19ca-44a3-b5a8-a0aff226824d
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:34.410544    3814 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-b955d9d8-9qrh2_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\""
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:34.412449    3814 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="849caf6623056263941b8dffb6c02926eadb7bf863715e029f8e44f0aa46dec5"
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:34.412487    3814 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9"
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:34.414048    3814 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d\""
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:34.414454    3814 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9\""
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:34.416632    3814 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467\""
	Jun 01 11:34:34 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:34.417037    3814 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5\""
	
	* 
	* ==> storage-provisioner [1c722764e34e] <==
	* I0601 11:33:51.302463       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0601 11:34:28.409599       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220601043243-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220601043243-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.306835255s)
helpers_test.go:270: non-running pods: coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220601043243-2342 describe pod coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220601043243-2342 describe pod coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5: exit status 1 (230.170643ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-blq67" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-9qrh2" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-x7xtc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-762k5" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220601043243-2342 describe pod coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601043243-2342
helpers_test.go:235: (dbg) docker inspect newest-cni-20220601043243-2342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e",
	        "Created": "2022-06-01T11:32:49.891079452Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T11:33:36.97014231Z",
	            "FinishedAt": "2022-06-01T11:33:35.066311713Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/hosts",
	        "LogPath": "/var/lib/docker/containers/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e/39500d2a5407bc9d09353766f117699399118c7230ecba55053f4104f94ccc6e-json.log",
	        "Name": "/newest-cni-20220601043243-2342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220601043243-2342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220601043243-2342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0ec8cff8d5390a0578938c22f8ef6ca5ae52f76ef9dab0ef9c221a9afc25ab5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220601043243-2342",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220601043243-2342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220601043243-2342",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220601043243-2342",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220601043243-2342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fc47614dc08aaf004218d5d4b46fde91f97191d53790fdce2ed0a1daaacecf0c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55526"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55528"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55529"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fc47614dc08a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220601043243-2342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "39500d2a5407",
	                        "newest-cni-20220601043243-2342"
	                    ],
	                    "NetworkID": "541a970f52a93c1bca2cd182e71499f265458bdd4da587edb656b740fd0f196d",
	                    "EndpointID": "3cec8b3a5976683c2d928b2c6302bd336080ba19858a2e21c5a6400a301cd916",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220601043243-2342 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220601043243-2342 logs -n 25: (5.41207498s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | no-preload-20220601041659-2342                             | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220601041659-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:24 PDT |
	|         | no-preload-20220601041659-2342                             |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:24 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:25 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:25 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:26 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:26 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:31 PDT | 01 Jun 22 04:31 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | old-k8s-version-20220601040844-2342                        | old-k8s-version-20220601040844-2342            | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601042455-2342             | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601042455-2342             | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601042455-2342 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:32 PDT |
	|         | default-k8s-different-port-20220601042455-2342             |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601043243-2342 --memory=2200            | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:32 PDT | 01 Jun 22 04:33 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220601043243-2342 --memory=2200            | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:33 PDT | 01 Jun 22 04:33 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | newest-cni-20220601043243-2342                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220601043243-2342                             | newest-cni-20220601043243-2342                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 04:34 PDT | 01 Jun 22 04:34 PDT |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 04:33:35
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 04:33:35.720618   15168 out.go:296] Setting OutFile to fd 1 ...
	I0601 04:33:35.720870   15168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:33:35.720880   15168 out.go:309] Setting ErrFile to fd 2...
	I0601 04:33:35.720890   15168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 04:33:35.721020   15168 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 04:33:35.721322   15168 out.go:303] Setting JSON to false
	I0601 04:33:35.737205   15168 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5585,"bootTime":1654077630,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 04:33:35.737334   15168 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 04:33:35.759559   15168 out.go:177] * [newest-cni-20220601043243-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 04:33:35.781521   15168 notify.go:193] Checking for updates...
	I0601 04:33:35.803476   15168 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 04:33:35.825398   15168 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:35.847711   15168 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 04:33:35.876102   15168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 04:33:35.896119   15168 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 04:33:35.917997   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:35.918629   15168 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 04:33:35.991115   15168 docker.go:137] docker version: linux-20.10.14
	I0601 04:33:35.991240   15168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:33:36.117563   15168 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:33:36.062705506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:33:36.139492   15168 out.go:177] * Using the docker driver based on existing profile
	I0601 04:33:36.161209   15168 start.go:284] selected driver: docker
	I0601 04:33:36.161237   15168 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[a
piserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:36.161459   15168 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 04:33:36.164798   15168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 04:33:36.292407   15168 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 11:33:36.238121209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 04:33:36.292574   15168 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 04:33:36.292590   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:36.292599   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:36.292612   15168 start_flags.go:306] config:
	{Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_
ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:36.314637   15168 out.go:177] * Starting control plane node newest-cni-20220601043243-2342 in cluster newest-cni-20220601043243-2342
	I0601 04:33:36.336250   15168 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 04:33:36.358074   15168 out.go:177] * Pulling base image ...
	I0601 04:33:36.400252   15168 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:33:36.400261   15168 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 04:33:36.400324   15168 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 04:33:36.400340   15168 cache.go:57] Caching tarball of preloaded images
	I0601 04:33:36.400507   15168 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 04:33:36.400537   15168 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 04:33:36.401383   15168 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/config.json ...
	I0601 04:33:36.468470   15168 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 04:33:36.468517   15168 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 04:33:36.468530   15168 cache.go:206] Successfully downloaded all kic artifacts
	I0601 04:33:36.468582   15168 start.go:352] acquiring machines lock for newest-cni-20220601043243-2342: {Name:mk1c220030e5dc7346d70b8e86adc86c2159451d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 04:33:36.468682   15168 start.go:356] acquired machines lock for "newest-cni-20220601043243-2342" in 63.546µs
	I0601 04:33:36.468703   15168 start.go:94] Skipping create...Using existing machine configuration
	I0601 04:33:36.468712   15168 fix.go:55] fixHost starting: 
	I0601 04:33:36.468936   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:36.538969   15168 fix.go:103] recreateIfNeeded on newest-cni-20220601043243-2342: state=Stopped err=<nil>
	W0601 04:33:36.539000   15168 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 04:33:36.560938   15168 out.go:177] * Restarting existing docker container for "newest-cni-20220601043243-2342" ...
	I0601 04:33:36.582691   15168 cli_runner.go:164] Run: docker start newest-cni-20220601043243-2342
	I0601 04:33:36.965186   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:37.040816   15168 kic.go:416] container "newest-cni-20220601043243-2342" state is running.
	I0601 04:33:37.041410   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:37.123521   15168 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/config.json ...
	I0601 04:33:37.124035   15168 machine.go:88] provisioning docker machine ...
	I0601 04:33:37.124080   15168 ubuntu.go:169] provisioning hostname "newest-cni-20220601043243-2342"
	I0601 04:33:37.124139   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.206841   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:37.207075   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:37.207093   15168 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601043243-2342 && echo "newest-cni-20220601043243-2342" | sudo tee /etc/hostname
	I0601 04:33:37.334284   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601043243-2342
	
	I0601 04:33:37.334371   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.411022   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:37.411207   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:37.411228   15168 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601043243-2342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601043243-2342/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601043243-2342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 04:33:37.529094   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:33:37.529112   15168 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 04:33:37.529177   15168 ubuntu.go:177] setting up certificates
	I0601 04:33:37.529187   15168 provision.go:83] configureAuth start
	I0601 04:33:37.529248   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:37.607518   15168 provision.go:138] copyHostCerts
	I0601 04:33:37.607606   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 04:33:37.607615   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 04:33:37.607703   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1078 bytes)
	I0601 04:33:37.607912   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 04:33:37.607923   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 04:33:37.607982   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 04:33:37.608149   15168 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 04:33:37.608155   15168 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 04:33:37.608215   15168 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1679 bytes)
	I0601 04:33:37.608334   15168 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601043243-2342 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601043243-2342]
	I0601 04:33:37.777252   15168 provision.go:172] copyRemoteCerts
	I0601 04:33:37.777323   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 04:33:37.777383   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:37.854290   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:37.942665   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0601 04:33:37.962614   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0601 04:33:37.983784   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 04:33:38.002393   15168 provision.go:86] duration metric: configureAuth took 473.185921ms
	I0601 04:33:38.002406   15168 ubuntu.go:193] setting minikube options for container-runtime
	I0601 04:33:38.002590   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:38.002643   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.076962   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.077109   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.077122   15168 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 04:33:38.199559   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 04:33:38.199574   15168 ubuntu.go:71] root file system type: overlay
	I0601 04:33:38.199704   15168 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 04:33:38.199770   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.271586   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.271747   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.271799   15168 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 04:33:38.400393   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 04:33:38.400485   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.473348   15168 main.go:134] libmachine: Using SSH client type: native
	I0601 04:33:38.473500   15168 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55530 <nil> <nil>}
	I0601 04:33:38.473515   15168 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 04:33:38.599343   15168 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 04:33:38.599357   15168 machine.go:91] provisioned docker machine in 1.475295788s
	I0601 04:33:38.599371   15168 start.go:306] post-start starting for "newest-cni-20220601043243-2342" (driver="docker")
	I0601 04:33:38.599376   15168 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 04:33:38.599429   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 04:33:38.599473   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.672100   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:38.756558   15168 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 04:33:38.760333   15168 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 04:33:38.760347   15168 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 04:33:38.760354   15168 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 04:33:38.760358   15168 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 04:33:38.760366   15168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 04:33:38.760478   15168 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 04:33:38.760619   15168 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem -> 23422.pem in /etc/ssl/certs
	I0601 04:33:38.760771   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 04:33:38.767743   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:33:38.784935   15168 start.go:309] post-start completed in 185.552686ms
	I0601 04:33:38.785046   15168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 04:33:38.785139   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:38.856795   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:38.942547   15168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 04:33:38.946912   15168 fix.go:57] fixHost completed within 2.478168311s
	I0601 04:33:38.946922   15168 start.go:81] releasing machines lock for "newest-cni-20220601043243-2342", held for 2.478201289s
	I0601 04:33:38.946988   15168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601043243-2342
	I0601 04:33:39.020056   15168 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 04:33:39.020057   15168 ssh_runner.go:195] Run: systemctl --version
	I0601 04:33:39.020128   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.020136   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.099694   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:39.102563   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:39.184807   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 04:33:39.318352   15168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:33:39.327971   15168 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 04:33:39.328023   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 04:33:39.337289   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 04:33:39.350192   15168 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 04:33:39.421583   15168 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 04:33:39.493616   15168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 04:33:39.505293   15168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 04:33:39.575468   15168 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 04:33:39.585698   15168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:33:39.621072   15168 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 04:33:39.702871   15168 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 04:33:39.702997   15168 cli_runner.go:164] Run: docker exec -t newest-cni-20220601043243-2342 dig +short host.docker.internal
	I0601 04:33:39.840169   15168 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 04:33:39.840253   15168 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 04:33:39.844483   15168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:33:39.855140   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:39.950021   15168 out.go:177]   - kubelet.network-plugin=cni
	I0601 04:33:39.971828   15168 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 04:33:39.993668   15168 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 04:33:39.993799   15168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:33:40.025393   15168 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:33:40.025408   15168 docker.go:541] Images already preloaded, skipping extraction
	I0601 04:33:40.025484   15168 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 04:33:40.055373   15168 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 04:33:40.055395   15168 cache_images.go:84] Images are preloaded, skipping loading
	I0601 04:33:40.055472   15168 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 04:33:40.131178   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:40.131190   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:40.131210   15168 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 04:33:40.131223   15168 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601043243-2342 NodeName:newest-cni-20220601043243-2342 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 04:33:40.131366   15168 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220601043243-2342"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 04:33:40.131481   15168 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601043243-2342 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 04:33:40.131563   15168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 04:33:40.139314   15168 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 04:33:40.139369   15168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 04:33:40.146700   15168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0601 04:33:40.159242   15168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 04:33:40.173044   15168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
	I0601 04:33:40.187746   15168 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 04:33:40.192011   15168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 04:33:40.202490   15168 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342 for IP: 192.168.49.2
	I0601 04:33:40.202649   15168 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 04:33:40.202730   15168 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 04:33:40.202816   15168 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/client.key
	I0601 04:33:40.202874   15168 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.key.dd3b5fb2
	I0601 04:33:40.202934   15168 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.key
	I0601 04:33:40.203229   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem (1338 bytes)
	W0601 04:33:40.203287   15168 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342_empty.pem, impossibly tiny 0 bytes
	I0601 04:33:40.203317   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 04:33:40.203414   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1078 bytes)
	I0601 04:33:40.203445   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 04:33:40.203509   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1679 bytes)
	I0601 04:33:40.203625   15168 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem (1708 bytes)
	I0601 04:33:40.204248   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 04:33:40.222705   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 04:33:40.240287   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 04:33:40.261621   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601043243-2342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 04:33:40.279108   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 04:33:40.296730   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 04:33:40.313701   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 04:33:40.331521   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 04:33:40.349032   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 04:33:40.366996   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/2342.pem --> /usr/share/ca-certificates/2342.pem (1338 bytes)
	I0601 04:33:40.384175   15168 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/23422.pem --> /usr/share/ca-certificates/23422.pem (1708 bytes)
	I0601 04:33:40.402174   15168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 04:33:40.415032   15168 ssh_runner.go:195] Run: openssl version
	I0601 04:33:40.420329   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2342.pem && ln -fs /usr/share/ca-certificates/2342.pem /etc/ssl/certs/2342.pem"
	I0601 04:33:40.428113   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.432369   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:24 /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.432419   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2342.pem
	I0601 04:33:40.437889   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2342.pem /etc/ssl/certs/51391683.0"
	I0601 04:33:40.445633   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23422.pem && ln -fs /usr/share/ca-certificates/23422.pem /etc/ssl/certs/23422.pem"
	I0601 04:33:40.453598   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.457515   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:24 /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.457560   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23422.pem
	I0601 04:33:40.462878   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23422.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 04:33:40.470241   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 04:33:40.478182   15168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.482194   15168 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.482243   15168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 04:33:40.487883   15168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 04:33:40.495262   15168 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601043243-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601043243-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 04:33:40.495361   15168 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:33:40.525182   15168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 04:33:40.532579   15168 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 04:33:40.532592   15168 kubeadm.go:626] restartCluster start
	I0601 04:33:40.532637   15168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 04:33:40.539400   15168 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:40.539458   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:40.612589   15168 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601043243-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:40.612763   15168 kubeconfig.go:127] "newest-cni-20220601043243-2342" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 04:33:40.613129   15168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:40.614481   15168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 04:33:40.622686   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:40.622778   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:40.631362   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:40.833513   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:40.833669   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:40.844419   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.032599   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.032778   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.043272   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.233485   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.233633   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.245520   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.432039   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.432199   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.442259   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.632036   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.632182   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.643253   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:41.832061   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:41.832163   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:41.842628   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.032064   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.032280   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.043104   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.232141   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.232275   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.242877   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.431444   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.431558   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.442433   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.633665   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.633798   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.644236   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:42.833678   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:42.833769   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:42.844387   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.032052   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.032199   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.042815   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.231989   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.232051   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.241645   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.433549   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.433737   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.444478   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.633562   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.633776   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.644352   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.644361   15168 api_server.go:165] Checking apiserver status ...
	I0601 04:33:43.644405   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 04:33:43.652306   15168 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.652317   15168 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 04:33:43.652325   15168 kubeadm.go:1092] stopping kube-system containers ...
	I0601 04:33:43.652377   15168 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 04:33:43.683927   15168 docker.go:442] Stopping containers: [57d0d227b400 5b9b2242ae33 4c156d0546e3 5112b8a0b836 c3bb43e0ee6a bd5a43523de9 dbd0a440cdba 5263511ddfa5 e696f119d3b9 167c5c91c499 5eefec557a4a f583206f062e 5c2a8150bc25 6c3cf6adcbfe a101a3806651 23cd8f73e35d 1a71bae23aeb]
	I0601 04:33:43.684003   15168 ssh_runner.go:195] Run: docker stop 57d0d227b400 5b9b2242ae33 4c156d0546e3 5112b8a0b836 c3bb43e0ee6a bd5a43523de9 dbd0a440cdba 5263511ddfa5 e696f119d3b9 167c5c91c499 5eefec557a4a f583206f062e 5c2a8150bc25 6c3cf6adcbfe a101a3806651 23cd8f73e35d 1a71bae23aeb
	I0601 04:33:43.715513   15168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 04:33:43.725843   15168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 04:33:43.733397   15168 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 11:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 11:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jun  1 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 11:32 /etc/kubernetes/scheduler.conf
	
	I0601 04:33:43.733445   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 04:33:43.740565   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 04:33:43.747822   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 04:33:43.754907   15168 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.754970   15168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 04:33:43.762172   15168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 04:33:43.769712   15168 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 04:33:43.769754   15168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 04:33:43.776624   15168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 04:33:43.784322   15168 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 04:33:43.784331   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:43.833113   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.699448   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.829515   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.877837   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:44.927753   15168 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:33:44.927821   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:45.437108   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:45.937054   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:46.437405   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:46.487754   15168 api_server.go:71] duration metric: took 1.559980599s to wait for apiserver process to appear ...
	I0601 04:33:46.487776   15168 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:33:46.487791   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:49.450097   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 04:33:49.450114   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 04:33:49.951698   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:49.957645   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:33:49.957661   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:33:50.450344   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:50.455504   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 04:33:50.455516   15168 api_server.go:102] status: https://127.0.0.1:55529/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 04:33:50.950347   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:50.956284   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 200:
	ok
	I0601 04:33:50.963370   15168 api_server.go:140] control plane version: v1.23.6
	I0601 04:33:50.963383   15168 api_server.go:130] duration metric: took 4.475543754s to wait for apiserver health ...
	I0601 04:33:50.963389   15168 cni.go:95] Creating CNI manager for ""
	I0601 04:33:50.963393   15168 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 04:33:50.963402   15168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:33:50.971086   15168 system_pods.go:59] 9 kube-system pods found
	I0601 04:33:50.971106   15168 system_pods.go:61] "coredns-64897985d-blq67" [ded91fd2-d2c9-4420-9f11-7eab7d7a70cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:50.971112   15168 system_pods.go:61] "coredns-64897985d-svsmk" [d6d0a06b-bb5a-461b-99d5-7b2fd6320947] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:50.971120   15168 system_pods.go:61] "etcd-newest-cni-20220601043243-2342" [5d33aabb-0215-438c-ad10-61ba084cc15f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:33:50.971129   15168 system_pods.go:61] "kube-apiserver-newest-cni-20220601043243-2342" [8c56d510-5f64-431d-8954-8c3cf47404a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 04:33:50.971135   15168 system_pods.go:61] "kube-controller-manager-newest-cni-20220601043243-2342" [0b2a261d-dcd5-4705-b2fb-db51ba34d827] Running
	I0601 04:33:50.971138   15168 system_pods.go:61] "kube-proxy-br6ph" [788e299a-04d3-43a8-bf6b-c0e52acbcd4a] Running
	I0601 04:33:50.971142   15168 system_pods.go:61] "kube-scheduler-newest-cni-20220601043243-2342" [16e28b92-b394-42e6-bed5-ea1917414ae2] Running
	I0601 04:33:50.971146   15168 system_pods.go:61] "metrics-server-b955d9d8-9qrh2" [37627389-19ca-44a3-b5a8-a0aff226824d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:33:50.971153   15168 system_pods.go:61] "storage-provisioner" [ab053075-62ac-43ac-b212-ba5bfef0faef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:33:50.971157   15168 system_pods.go:74] duration metric: took 7.751202ms to wait for pod list to return data ...
	I0601 04:33:50.971164   15168 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:33:50.975532   15168 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:33:50.975549   15168 node_conditions.go:123] node cpu capacity is 6
	I0601 04:33:50.975561   15168 node_conditions.go:105] duration metric: took 4.393013ms to run NodePressure ...
	I0601 04:33:50.975577   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 04:33:51.213450   15168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 04:33:51.224417   15168 ops.go:34] apiserver oom_adj: -16
	I0601 04:33:51.224430   15168 kubeadm.go:630] restartCluster took 10.691695543s
	I0601 04:33:51.224437   15168 kubeadm.go:397] StartCluster complete in 10.729042681s
	I0601 04:33:51.224455   15168 settings.go:142] acquiring lock: {Name:mk9461222f93f83c395ca7448cab2c54595d0faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:51.224559   15168 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 04:33:51.225197   15168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk5db4f22c4adef48a3a610ba6cc6bc82fdfe595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 04:33:51.229821   15168 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601043243-2342" rescaled to 1
	I0601 04:33:51.229862   15168 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 04:33:51.229891   15168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 04:33:51.251907   15168 out.go:177] * Verifying Kubernetes components...
	I0601 04:33:51.229903   15168 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 04:33:51.230072   15168 config.go:178] Loaded profile config "newest-cni-20220601043243-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 04:33:51.294534   15168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 04:33:51.294549   15168 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294558   15168 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294565   15168 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601043243-2342"
	I0601 04:33:51.294576   15168 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601043243-2342"
	I0601 04:33:51.294582   15168 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601043243-2342"
	W0601 04:33:51.294589   15168 addons.go:165] addon metrics-server should already be in state true
	I0601 04:33:51.294592   15168 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601043243-2342"
	W0601 04:33:51.294604   15168 addons.go:165] addon storage-provisioner should already be in state true
	I0601 04:33:51.294553   15168 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601043243-2342"
	W0601 04:33:51.294616   15168 addons.go:165] addon dashboard should already be in state true
	I0601 04:33:51.294629   15168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601043243-2342"
	I0601 04:33:51.294668   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.294672   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.294679   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.295056   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295147   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295159   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.295281   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.425427   15168 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601043243-2342"
	I0601 04:33:51.476401   15168 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	W0601 04:33:51.476435   15168 addons.go:165] addon default-storageclass should already be in state true
	I0601 04:33:51.455598   15168 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 04:33:51.534679   15168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 04:33:51.645866   15168 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 04:33:51.513660   15168 host.go:66] Checking if "newest-cni-20220601043243-2342" exists ...
	I0601 04:33:51.571610   15168 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:33:51.608663   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 04:33:51.667451   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 04:33:51.667499   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 04:33:51.667521   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 04:33:51.667530   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 04:33:51.667553   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.667572   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.667577   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.671586   15168 cli_runner.go:164] Run: docker container inspect newest-cni-20220601043243-2342 --format={{.State.Status}}
	I0601 04:33:51.681072   15168 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 04:33:51.681150   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.875389   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.876872   15168 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 04:33:51.876905   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 04:33:51.877044   15168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601043243-2342
	I0601 04:33:51.877013   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.877104   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.879062   15168 api_server.go:51] waiting for apiserver process to appear ...
	I0601 04:33:51.879531   15168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 04:33:51.896460   15168 api_server.go:71] duration metric: took 666.56763ms to wait for apiserver process to appear ...
	I0601 04:33:51.896493   15168 api_server.go:87] waiting for apiserver healthz status ...
	I0601 04:33:51.896513   15168 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55529/healthz ...
	I0601 04:33:51.905685   15168 api_server.go:266] https://127.0.0.1:55529/healthz returned 200:
	ok
	I0601 04:33:51.907893   15168 api_server.go:140] control plane version: v1.23.6
	I0601 04:33:51.907906   15168 api_server.go:130] duration metric: took 11.403704ms to wait for apiserver health ...
	I0601 04:33:51.907913   15168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 04:33:51.917780   15168 system_pods.go:59] 9 kube-system pods found
	I0601 04:33:51.917804   15168 system_pods.go:61] "coredns-64897985d-blq67" [ded91fd2-d2c9-4420-9f11-7eab7d7a70cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:51.917831   15168 system_pods.go:61] "coredns-64897985d-svsmk" [d6d0a06b-bb5a-461b-99d5-7b2fd6320947] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 04:33:51.917861   15168 system_pods.go:61] "etcd-newest-cni-20220601043243-2342" [5d33aabb-0215-438c-ad10-61ba084cc15f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 04:33:51.917884   15168 system_pods.go:61] "kube-apiserver-newest-cni-20220601043243-2342" [8c56d510-5f64-431d-8954-8c3cf47404a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 04:33:51.917903   15168 system_pods.go:61] "kube-controller-manager-newest-cni-20220601043243-2342" [0b2a261d-dcd5-4705-b2fb-db51ba34d827] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 04:33:51.917910   15168 system_pods.go:61] "kube-proxy-br6ph" [788e299a-04d3-43a8-bf6b-c0e52acbcd4a] Running
	I0601 04:33:51.917947   15168 system_pods.go:61] "kube-scheduler-newest-cni-20220601043243-2342" [16e28b92-b394-42e6-bed5-ea1917414ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 04:33:51.917958   15168 system_pods.go:61] "metrics-server-b955d9d8-9qrh2" [37627389-19ca-44a3-b5a8-a0aff226824d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 04:33:51.917967   15168 system_pods.go:61] "storage-provisioner" [ab053075-62ac-43ac-b212-ba5bfef0faef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 04:33:51.917973   15168 system_pods.go:74] duration metric: took 10.05572ms to wait for pod list to return data ...
	I0601 04:33:51.917981   15168 default_sa.go:34] waiting for default service account to be created ...
	I0601 04:33:51.922534   15168 default_sa.go:45] found service account: "default"
	I0601 04:33:51.922552   15168 default_sa.go:55] duration metric: took 4.563505ms for default service account to be created ...
	I0601 04:33:51.922563   15168 kubeadm.go:572] duration metric: took 692.673499ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 04:33:51.922586   15168 node_conditions.go:102] verifying NodePressure condition ...
	I0601 04:33:51.926230   15168 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 04:33:51.926242   15168 node_conditions.go:123] node cpu capacity is 6
	I0601 04:33:51.926250   15168 node_conditions.go:105] duration metric: took 3.660356ms to run NodePressure ...
	I0601 04:33:51.926258   15168 start.go:213] waiting for startup goroutines ...
	I0601 04:33:51.969132   15168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601043243-2342/id_rsa Username:docker}
	I0601 04:33:51.996898   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 04:33:51.996917   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 04:33:52.004032   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 04:33:52.010449   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 04:33:52.010463   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 04:33:52.021734   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 04:33:52.021755   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 04:33:52.080927   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 04:33:52.080944   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 04:33:52.096221   15168 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:33:52.096238   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 04:33:52.103028   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 04:33:52.108213   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 04:33:52.108227   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 04:33:52.120168   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 04:33:52.187777   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 04:33:52.187789   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 04:33:52.218467   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 04:33:52.218483   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 04:33:52.297544   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 04:33:52.297560   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 04:33:52.387473   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 04:33:52.387493   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 04:33:52.414422   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 04:33:52.414440   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 04:33:52.436622   15168 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:33:52.436636   15168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 04:33:52.497411   15168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 04:33:53.406044   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.401973256s)
	I0601 04:33:53.406130   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.303070551s)
	I0601 04:33:53.423155   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302944456s)
	I0601 04:33:53.423183   15168 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601043243-2342"
	I0601 04:33:53.599892   15168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.102440107s)
	I0601 04:33:53.662455   15168 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 04:33:53.699587   15168 addons.go:417] enableAddons completed in 2.469643476s
	I0601 04:33:53.740112   15168 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 04:33:53.763371   15168 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601043243-2342" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 11:33:37 UTC, end at Wed 2022-06-01 11:34:40 UTC. --
	Jun 01 11:33:53 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:53.204687409Z" level=info msg="ignoring event" container=67f09f21ea542ef9128efe11161e225c52c0035f538d20c23dac884dbb16e46b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.065100759Z" level=info msg="ignoring event" container=04abef08f6c55b6198c5cdf01fe458ce6553a8bd49a47869da4fc17093f64180 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.096951128Z" level=info msg="ignoring event" container=6320de88ab7ddcf369b3cca05861794f3c8a7bb27f693f290546bc4900757e42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.918483094Z" level=info msg="ignoring event" container=6f903ec59dcd7801b90689f2fe546920ee0676673c4ca3e91eeb5f4d36c81caa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:33:54 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:33:54.933818051Z" level=info msg="ignoring event" container=09516534a86ed4fb769a9c14c2ce08c4be1844af19292fc7db2b73b3e2efa61d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:28 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:28.583995632Z" level=info msg="ignoring event" container=1c722764e34ec47232932eff7431c538c08d84f7ff6d6000ec390b51a1541a1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:32 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:32.595846366Z" level=info msg="ignoring event" container=849caf6623056263941b8dffb6c02926eadb7bf863715e029f8e44f0aa46dec5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:33.474902837Z" level=info msg="ignoring event" container=5fb110a9ca2e4ca8a7158373c82e0f0a063e6aade8c106d75ea0f24ca8d5a467 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:33.486597920Z" level=info msg="ignoring event" container=351a206f55d7db84bc43a2624b12a475b72daa360ccab02ee82dbf99e45bf79d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:33.637395720Z" level=info msg="ignoring event" container=f990d53ad715b0efbfe843dce04344d1db4a6f5f016d79bfb1b37a575a754ef5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:33 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:33.929064035Z" level=info msg="ignoring event" container=09a50a2675377d826699aa095df1d07b7a037fd70925ae81da35f92dd280ecf9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:35 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:35.209646945Z" level=info msg="ignoring event" container=c76c29b9a3b2d77008e01bf9524d1877273ad3dbbc38a20cf28b0ef6a5981772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:35 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:35.210802576Z" level=info msg="ignoring event" container=5f124cd30a13734e2c4f3f7d120cb41e8766f0850307e7088a3db1453637b2ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:35 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:35.393749046Z" level=info msg="ignoring event" container=ee7a28a85c4f4c39c94fcf27fbaa9aea5f3d6285578004ce367faa7e0c057a51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:35 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:35.473005019Z" level=info msg="ignoring event" container=78488b815b6fcc6e53643544c4737ec926c88d744166d58655fa9662176f4e7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:37.055113276Z" level=info msg="ignoring event" container=2649efd0c7a4989c09864fa5f1995023fbc3416d1cbc897d3ade9daf6182df09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:37 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:37.097623859Z" level=info msg="ignoring event" container=e34fb5a64af8f2c7d4c6f8028e16f1e898c5b5ff848d6cc2f0e5ae43b7fcb006 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:38 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:38.085674376Z" level=info msg="ignoring event" container=7e4fbec0f2985fd3c044d7434e388e71ae94cc36229de060e9aa9fc595d7f43a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:38 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:38.176728947Z" level=info msg="ignoring event" container=85eda7e7b475b5b4630727f1105e12c7e04cb48d9fb63e12f8a23cac4f4fdb86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:38 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:38.507953992Z" level=info msg="ignoring event" container=6da8915ed85a014c4206da0738c7537678e7cd3227b11bee17f0ac24212dca5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:38 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:38.538096913Z" level=info msg="ignoring event" container=c453010bc789e07d8c828816ef145df1d4fb362d40b212827dd2d8b6a014aaa3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:39 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:39.898741787Z" level=info msg="ignoring event" container=1c49ac74707db4865a09f89c40d432607175383e08f836a15057615f9cc263bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:39 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:39.981027120Z" level=info msg="ignoring event" container=e85033f4cafd6f33621e3f0c59d50c068b9d528299624c1688a8e17f8799ca34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:40 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:40.010522320Z" level=info msg="ignoring event" container=a0413cb57e801cdeea694f3d730ecd252f932e72366edfc0814856c81c03fa7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 11:34:40 newest-cni-20220601043243-2342 dockerd[131]: time="2022-06-01T11:34:40.103649231Z" level=info msg="ignoring event" container=79c8e4ac7d5bbe0b4ea60ab685be28af727b8312cbaa5738c505153a7f38538c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	394e5e2706bab       6e38f40d628db       8 seconds ago        Running             storage-provisioner       2                   3a2241b2b8da0
	1c722764e34ec       6e38f40d628db       49 seconds ago       Exited              storage-provisioner       1                   3a2241b2b8da0
	1aefc60a1e6d6       4c03754524064       49 seconds ago       Running             kube-proxy                1                   a0f21af05d39d
	7fcc1256c3e3e       25f8c7f3da61c       55 seconds ago       Running             etcd                      1                   99956a7b8567f
	29ba40796a3f6       df7b72818ad2e       55 seconds ago       Running             kube-controller-manager   1                   f2aee1dffb5f5
	06ffd68bc7fcb       8fa62c12256df       55 seconds ago       Running             kube-apiserver            1                   3b870b79e79d6
	3d239524ff600       595f327f224a4       55 seconds ago       Running             kube-scheduler            1                   bfbbd9c597142
	dbd0a440cdbad       4c03754524064       About a minute ago   Exited              kube-proxy                0                   e696f119d3b9a
	167c5c91c4994       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   6c3cf6adcbfe1
	5eefec557a4a2       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   23cd8f73e35da
	f583206f062ef       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   1a71bae23aeb8
	5c2a8150bc256       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   a101a38066516
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220601043243-2342
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220601043243-2342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=newest-cni-20220601043243-2342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T04_33_06_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:33:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220601043243-2342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:34:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:33:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:33:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:33:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:34:29 +0000   Wed, 01 Jun 2022 11:34:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    newest-cni-20220601043243-2342
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                4b8a245a-d54b-4f27-a340-95d267bdc6d0
	  Boot ID:                    f65ff030-0ce1-451f-b056-a175624cc17c
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-blq67                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     81s
	  kube-system                 etcd-newest-cni-20220601043243-2342                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         93s
	  kube-system                 kube-apiserver-newest-cni-20220601043243-2342             250m (4%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-newest-cni-20220601043243-2342    200m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-br6ph                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-newest-cni-20220601043243-2342             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 metrics-server-b955d9d8-9qrh2                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         78s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-x7xtc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-762k5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 80s                  kube-proxy  
	  Normal  Starting                 49s                  kube-proxy  
	  Normal  NodeHasSufficientPID     101s (x4 over 101s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    101s (x5 over 101s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  101s (x5 over 101s)  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 94s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                83s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x7 over 55s)    kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                11s                  kubelet     Node newest-cni-20220601043243-2342 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [7fcc1256c3e3] <==
	* {"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:newest-cni-20220601043243-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:48.008Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:48.009Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:33:48.009Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:33:51.672Z","caller":"traceutil/trace.go:171","msg":"trace[2010815828] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"187.081734ms","start":"2022-06-01T11:33:51.485Z","end":"2022-06-01T11:33:51.672Z","steps":["trace[2010815828] 'read index received'  (duration: 187.072759ms)","trace[2010815828] 'applied index is now lower than readState.Index'  (duration: 7.983µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:33:51.675Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"163.525805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:758"}
	{"level":"info","ts":"2022-06-01T11:33:51.675Z","caller":"traceutil/trace.go:171","msg":"trace[2117314095] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:536; }","duration":"163.619537ms","start":"2022-06-01T11:33:51.512Z","end":"2022-06-01T11:33:51.675Z","steps":["trace[2117314095] 'agreement among raft nodes before linearized reading'  (duration: 163.449501ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:34:33.100Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.875864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:34:33.100Z","caller":"traceutil/trace.go:171","msg":"trace[1153500662] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:638; }","duration":"131.942448ms","start":"2022-06-01T11:34:32.968Z","end":"2022-06-01T11:34:33.100Z","steps":["trace[1153500662] 'range keys from in-memory index tree'  (duration: 131.820865ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:34:33.368Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"130.136042ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128013397826101224 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-newest-cni-20220601043243-2342\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-newest-cni-20220601043243-2342\" value_size:3964 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2022-06-01T11:34:33.368Z","caller":"traceutil/trace.go:171","msg":"trace[1938680562] transaction","detail":"{read_only:false; number_of_response:0; response_revision:640; }","duration":"130.81007ms","start":"2022-06-01T11:34:33.237Z","end":"2022-06-01T11:34:33.368Z","steps":["trace[1938680562] 'compare'  (duration: 130.100863ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:34:36.989Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.260718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-b955d9d8-9qrh2.16f47bf8dcebe985\" ","response":"range_response_count:1 size:722"}
	{"level":"info","ts":"2022-06-01T11:34:36.989Z","caller":"traceutil/trace.go:171","msg":"trace[666648184] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-b955d9d8-9qrh2.16f47bf8dcebe985; range_end:; response_count:1; response_revision:660; }","duration":"145.420321ms","start":"2022-06-01T11:34:36.843Z","end":"2022-06-01T11:34:36.989Z","steps":["trace[666648184] 'agreement among raft nodes before linearized reading'  (duration: 83.445243ms)","trace[666648184] 'range keys from in-memory index tree'  (duration: 61.782374ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:34:40.339Z","caller":"traceutil/trace.go:171","msg":"trace[81540992] linearizableReadLoop","detail":"{readStateIndex:732; appliedIndex:732; }","duration":"111.138143ms","start":"2022-06-01T11:34:40.227Z","end":"2022-06-01T11:34:40.339Z","steps":["trace[81540992] 'read index received'  (duration: 111.13069ms)","trace[81540992] 'applied index is now lower than readState.Index'  (duration: 6.207µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:34:40.339Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.258886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc.16f47bf970585ca7\" ","response":"range_response_count:1 size:788"}
	{"level":"info","ts":"2022-06-01T11:34:40.339Z","caller":"traceutil/trace.go:171","msg":"trace[2055724444] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc.16f47bf970585ca7; range_end:; response_count:1; response_revision:689; }","duration":"111.288727ms","start":"2022-06-01T11:34:40.227Z","end":"2022-06-01T11:34:40.339Z","steps":["trace[2055724444] 'agreement among raft nodes before linearized reading'  (duration: 111.229475ms)"],"step_count":1}
	
	* 
	* ==> etcd [f583206f062e] <==
	* {"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T11:33:01.149Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:newest-cni-20220601043243-2342 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:33:01.150Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:01.151Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:33:01.151Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:33:01.152Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T11:33:01.154Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:01.154Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T11:33:23.235Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-01T11:33:23.236Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220601043243-2342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/01 11:33:23 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/01 11:33:23 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-01T11:33:23.319Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-01T11:33:23.320Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:33:23.321Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T11:33:23.321Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220601043243-2342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  11:34:41 up  1:15,  0 users,  load average: 3.16, 1.37, 0.98
	Linux newest-cni-20220601043243-2342 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [06ffd68bc7fc] <==
	* I0601 11:33:49.591742       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:33:49.592441       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:33:49.592644       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:33:49.593431       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:33:49.593434       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:33:50.436974       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:33:50.437028       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:33:50.442687       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0601 11:33:50.622692       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 11:33:50.622747       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 11:33:50.622753       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 11:33:51.107725       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:33:51.116355       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:33:51.144995       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:33:51.194893       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:33:51.200342       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:33:51.484754       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:33:53.309518       1 controller.go:611] quota admission added evaluator for: namespaces
	I0601 11:33:53.532410       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.252.105]
	I0601 11:33:53.589843       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.137.196]
	I0601 11:34:28.981315       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 11:34:28.989391       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:34:29.015427       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [5eefec557a4a] <==
	* W0601 11:33:24.238569       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238642       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238662       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238682       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238713       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238691       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238745       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238746       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238756       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238867       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238875       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238905       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.238934       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239024       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239381       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239457       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239488       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239392       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239421       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239560       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239733       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239746       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.239871       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.240105       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 11:33:24.241861       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [29ba40796a3f] <==
	* I0601 11:34:28.995352       1 shared_informer.go:247] Caches are synced for job 
	I0601 11:34:28.995777       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 11:34:29.001239       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 11:34:29.001279       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 11:34:29.001288       1 disruption.go:371] Sending events to api server.
	I0601 11:34:29.002787       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:34:29.006856       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:34:29.012890       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:34:29.012947       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:34:29.013169       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:34:29.013276       1 shared_informer.go:247] Caches are synced for taint 
	I0601 11:34:29.013355       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0601 11:34:29.013417       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220601043243-2342. Assuming now as a timestamp.
	I0601 11:34:29.013462       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0601 11:34:29.013638       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 11:34:29.013819       1 event.go:294] "Event occurred" object="newest-cni-20220601043243-2342" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220601043243-2342 event: Registered Node newest-cni-20220601043243-2342 in Controller"
	I0601 11:34:29.019269       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 11:34:29.021890       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 11:34:29.070668       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:34:29.078538       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-762k5"
	I0601 11:34:29.082338       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-x7xtc"
	I0601 11:34:29.092332       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:34:29.494213       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:34:29.573463       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:34:29.573480       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [5c2a8150bc25] <==
	* I0601 11:33:18.486094       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 11:33:18.486176       1 shared_informer.go:247] Caches are synced for TTL 
	I0601 11:33:18.486189       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 11:33:18.486198       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:33:18.488268       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:33:18.514332       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 11:33:18.514353       1 disruption.go:371] Sending events to api server.
	I0601 11:33:18.535879       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 11:33:18.538627       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 11:33:18.591321       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:33:18.688423       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:33:18.737799       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:33:19.105173       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:33:19.140236       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:33:19.185960       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:33:19.186005       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:33:19.391665       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-br6ph"
	I0601 11:33:19.490832       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-blq67"
	I0601 11:33:19.496846       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-svsmk"
	I0601 11:33:19.679027       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:33:19.682565       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-svsmk"
	I0601 11:33:22.512334       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 11:33:22.516902       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 11:33:22.521522       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 11:33:22.528273       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-9qrh2"
	
	* 
	* ==> kube-proxy [1aefc60a1e6d] <==
	* I0601 11:33:51.384851       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:33:51.384911       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:33:51.384934       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:33:51.427297       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:33:51.427365       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:33:51.427374       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:33:51.427387       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:33:51.428576       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:33:51.429073       1 config.go:317] "Starting service config controller"
	I0601 11:33:51.429111       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:33:51.431360       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:33:51.431391       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:33:51.431400       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:33:51.529762       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [dbd0a440cdba] <==
	* I0601 11:33:20.322155       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 11:33:20.322203       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 11:33:20.322285       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:33:20.413521       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:33:20.413543       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 11:33:20.413548       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 11:33:20.413559       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 11:33:20.414014       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:33:20.416197       1 config.go:317] "Starting service config controller"
	I0601 11:33:20.416276       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:33:20.416370       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:33:20.416378       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:33:20.517085       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:33:20.517123       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [167c5c91c499] <==
	* E0601 11:33:03.643574       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.643602       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:33:03.643628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:33:03.643751       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:33:03.643780       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.643830       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:33:03.643838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.644727       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 11:33:03.644758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 11:33:03.646310       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:33:03.646367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:33:04.530495       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 11:33:04.530517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 11:33:04.594745       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:33:04.594783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:33:04.624004       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:33:04.624041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:33:04.662780       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:33:04.662826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:33:04.773035       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:33:04.773075       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0601 11:33:07.840123       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0601 11:33:23.228989       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:33:23.229762       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0601 11:33:23.229981       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [3d239524ff60] <==
	* W0601 11:33:46.221296       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0601 11:33:46.949448       1 serving.go:348] Generated self-signed cert in-memory
	W0601 11:33:49.469499       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 11:33:49.469539       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 11:33:49.469546       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 11:33:49.469550       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 11:33:49.511272       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 11:33:49.513487       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 11:33:49.513557       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:33:49.513564       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 11:33:49.513601       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 11:33:49.616068       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 11:33:37 UTC, end at Wed 2022-06-01 11:34:43 UTC. --
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.200002    3814 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-db9cf18b7b90ff89eb228481 -m comment --comment name: \"crio\" id: \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-db9cf18b7b90ff89eb228481':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc" podSandboxID={Type:docker ID:a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225} podNetnsPath="/proc/8180/ns/net" networkType="bridge" networkName="crio"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.207159    3814 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-6164f86d2e5cd2702bd08c2b -m comment --comment name: \"crio\" id: \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6164f86d2e5cd2702bd08c2b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-b955d9d8-9qrh2" podSandboxID={Type:docker ID:861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0} podNetnsPath="/proc/8181/ns/net" networkType="bridge" networkName="crio"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514089    3814 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-db9cf18b7b90ff89eb228481 -m comment --comment name: \"crio\" id: \"a61cfee7dc4118e26837476
bc14230bf97bfe4d2002b261e1482e99f952b1225\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-db9cf18b7b90ff89eb228481':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514152    3814 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-db9cf18b7b90ff89eb228481 -m comment --comment name: \"crio\" id: \"a61cfee7dc4118e26837476bc142
30bf97bfe4d2002b261e1482e99f952b1225\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-db9cf18b7b90ff89eb228481':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514178    3814 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\" network for pod \"dashboard-metrics-scraper-56974995fc-x7xtc\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-db9cf18b7b90ff89eb228481 -m comment --comment name: \"crio\" id: \"a61cfee7dc4118e26837476bc142
30bf97bfe4d2002b261e1482e99f952b1225\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-db9cf18b7b90ff89eb228481':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514184    3814 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to set up pod \"coredns-64897985d-blq67_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to teardown pod \"coredns-64897985d-blq67_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.38 -j CNI-9be422a96f9e076c46689fa7 -m comment --comment name: \"crio\" id: \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" --wait]: exit status 2: iptables v1.8.4 (legacy):
Couldn't load target `CNI-9be422a96f9e076c46689fa7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514213    3814 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to set up pod \"coredns-64897985d-blq67_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to teardown pod \"coredns-64897985d-blq67_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.38 -j CNI-9be422a96f9e076c46689fa7 -m comment --comment name: \"crio\" id: \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-9be422a96f9e076c46689fa7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-blq67"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514233    3814 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to set up pod \"coredns-64897985d-blq67_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" network for pod \"coredns-64897985d-blq67\": networkPlugin cni failed to teardown pod \"coredns-64897985d-blq67_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.38 -j CNI-9be422a96f9e076c46689fa7 -m comment --comment name: \"crio\" id: \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\" --wait]: exit status 2: iptables v1.8.4 (legacy): Could
n't load target `CNI-9be422a96f9e076c46689fa7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-blq67"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514257    3814 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-6164f86d2e5cd2702bd08c2b -m comment --comment name: \"crio\" id: \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" --wait]: exit status 2: ip
tables v1.8.4 (legacy): Couldn't load target `CNI-6164f86d2e5cd2702bd08c2b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514272    3814 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-blq67_kube-system(ded91fd2-d2c9-4420-9f11-7eab7d7a70cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-blq67_kube-system(ded91fd2-d2c9-4420-9f11-7eab7d7a70cf)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\\\" network for pod \\\"coredns-64897985d-blq67\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-blq67_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\\\" network for pod \\\"coredns-64897985d-blq67\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-blq67_kube-syste
m\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.38 -j CNI-9be422a96f9e076c46689fa7 -m comment --comment name: \\\"crio\\\" id: \\\"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9be422a96f9e076c46689fa7':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-blq67" podUID=ded91fd2-d2c9-4420-9f11-7eab7d7a70cf
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514280    3814 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-6164f86d2e5cd2702bd08c2b -m comment --comment name: \"crio\" id: \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-6164f86d2e5cd2702bd08c2b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-9qrh2"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514234    3814 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard(6c5e0a54-21da-429a-af54-5f8116aadef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard(6c5e0a54-21da-429a-af54-5f8116aadef1)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\\\" network for pod \\\"dashb
oard-metrics-scraper-56974995fc-x7xtc\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-db9cf18b7b90ff89eb228481 -m comment --comment name: \\\"crio\\\" id: \\\"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-db9cf18b7b90ff89eb228481':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-x7xtc" podUID=6c5e0a54-21da-429a-af54-5f8116aadef1
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514298    3814 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" network for pod \"metrics-server-b955d9d8-9qrh2\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-9qrh2_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-6164f86d2e5cd2702bd08c2b -m comment --comment name: \"crio\" id: \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-6164f86d2e5cd2702bd08c2b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-9qrh2"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:42.514331    3814 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-9qrh2_kube-system(37627389-19ca-44a3-b5a8-a0aff226824d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-9qrh2_kube-system(37627389-19ca-44a3-b5a8-a0aff226824d)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\\\" network for pod \\\"metrics-server-b955d9d8-9qrh2\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-9qrh2_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\\\" network for pod \\\"metrics-server-b955d9d8-9qrh2\\\": networkPlugin cni failed to teardown pod \\\"metr
ics-server-b955d9d8-9qrh2_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.40 -j CNI-6164f86d2e5cd2702bd08c2b -m comment --comment name: \\\"crio\\\" id: \\\"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6164f86d2e5cd2702bd08c2b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-9qrh2" podUID=37627389-19ca-44a3-b5a8-a0aff226824d
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.515144    3814 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"dashboard-metrics-scraper-56974995fc-x7xtc_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a61cfee7dc4118e26837476bc14230bf97bfe4d2002b261e1482e99f952b1225\""
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.523436    3814 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="04526c7c4048f1cf8b5dee4bc7cc011e430dac7d003d29371928b68b1f242cbf"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.526871    3814 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-b955d9d8-9qrh2_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\""
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.534697    3814 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"861fa97a4e803485b6c8e4dc20bf7fe6226f70ee4a9e4e683cc0da39204212e0\""
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.535639    3814 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-8469778f77-762k5_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"48ce509420008bcc6764dfc5b270337a7f1e436269a1f52c37bc950c1b73f68a\""
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.539605    3814 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="48ce509420008bcc6764dfc5b270337a7f1e436269a1f52c37bc950c1b73f68a"
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.541149    3814 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"48ce509420008bcc6764dfc5b270337a7f1e436269a1f52c37bc950c1b73f68a\""
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.543281    3814 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-blq67_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\""
	Jun 01 11:34:42 newest-cni-20220601043243-2342 kubelet[3814]: I0601 11:34:42.551914    3814 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"da1de20a6fca0c374b98446ecde81905487598ad77537ec7e145877d9e3fbeac\""
	Jun 01 11:34:43 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:43.194415    3814 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/metrics-server-b955d9d8-9qrh2" podSandboxID={Type:docker ID:2eab330b21dc108612b3fd233a09f99885654d6c2bf4506cb27ecbb20fc949a6} podNetnsPath="/proc/8657/ns/net" networkType="bridge" networkName="crio"
	Jun 01 11:34:43 newest-cni-20220601043243-2342 kubelet[3814]: E0601 11:34:43.232250    3814 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.41 -j CNI-541bf233cc7a018d8c83b657 -m comment --comment name: \"crio\" id: \"2eab330b21dc108612b3fd233a09f99885654d6c2bf4506cb27ecbb20fc949a6\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-541bf233cc7a018d8c83b657':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-b955d9d8-9qrh2" podSandboxID={Type:docker ID:2eab330b21dc108612b3fd233a09f99885654d6c2bf4506cb27ecbb20fc949a6} podNetnsPath="/proc/8657/ns/net" networkType="bridge" networkName="crio"
	
	* 
	* ==> storage-provisioner [1c722764e34e] <==
	* I0601 11:33:51.302463       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0601 11:34:28.409599       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	
	* 
	* ==> storage-provisioner [394e5e2706ba] <==
	* I0601 11:34:33.239384       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:34:33.268398       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:34:33.268693       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220601043243-2342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220601043243-2342 describe pod coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220601043243-2342 describe pod coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5: exit status 1 (204.570504ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-blq67" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-9qrh2" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-x7xtc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-762k5" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220601043243-2342 describe pod coredns-64897985d-blq67 metrics-server-b955d9d8-9qrh2 dashboard-metrics-scraper-56974995fc-x7xtc kubernetes-dashboard-8469778f77-762k5: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (50.03s)

                                                
                                    

Test pass (249/288)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.25
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.62
10 TestDownloadOnly/v1.23.6/json-events 5.76
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.77
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.43
18 TestDownloadOnlyKic 4.33
19 TestBinaryMirror 1.67
20 TestOffline 46.83
22 TestAddons/Setup 107.85
26 TestAddons/parallel/MetricsServer 5.7
27 TestAddons/parallel/HelmTiller 13.24
29 TestAddons/parallel/CSI 44.94
31 TestAddons/serial/GCPAuth 15.36
32 TestAddons/StoppedEnableDisable 13.04
33 TestCertOptions 30.69
34 TestCertExpiration 215.42
35 TestDockerFlags 27.18
36 TestForceSystemdFlag 234.75
37 TestForceSystemdEnv 30.67
39 TestHyperKitDriverInstallOrUpdate 5.57
42 TestErrorSpam/setup 25.12
43 TestErrorSpam/start 2.37
44 TestErrorSpam/status 1.35
45 TestErrorSpam/pause 1.91
46 TestErrorSpam/unpause 2.01
47 TestErrorSpam/stop 13.21
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 42.02
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 6.39
54 TestFunctional/serial/KubeContext 0.03
55 TestFunctional/serial/KubectlGetPods 1.47
58 TestFunctional/serial/CacheCmd/cache/add_remote 4.06
59 TestFunctional/serial/CacheCmd/cache/add_local 1.83
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
61 TestFunctional/serial/CacheCmd/cache/list 0.07
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
63 TestFunctional/serial/CacheCmd/cache/cache_reload 2.37
64 TestFunctional/serial/CacheCmd/cache/delete 0.15
65 TestFunctional/serial/MinikubeKubectlCmd 0.52
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.64
67 TestFunctional/serial/ExtraConfig 29.27
68 TestFunctional/serial/ComponentHealth 0.05
69 TestFunctional/serial/LogsCmd 3.24
70 TestFunctional/serial/LogsFileCmd 3.36
72 TestFunctional/parallel/ConfigCmd 0.47
73 TestFunctional/parallel/DashboardCmd 8.91
74 TestFunctional/parallel/DryRun 1.51
75 TestFunctional/parallel/InternationalLanguage 0.63
76 TestFunctional/parallel/StatusCmd 1.38
79 TestFunctional/parallel/ServiceCmd 13.89
81 TestFunctional/parallel/AddonsCmd 0.29
82 TestFunctional/parallel/PersistentVolumeClaim 26.34
84 TestFunctional/parallel/SSHCmd 1.11
85 TestFunctional/parallel/CpCmd 1.76
86 TestFunctional/parallel/MySQL 20.33
87 TestFunctional/parallel/FileSync 0.58
88 TestFunctional/parallel/CertSync 3.01
92 TestFunctional/parallel/NodeLabels 0.06
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
97 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.21
100 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
101 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
105 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
107 TestFunctional/parallel/ProfileCmd/profile_list 0.53
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.63
109 TestFunctional/parallel/MountCmd/any-port 9.83
110 TestFunctional/parallel/MountCmd/specific-port 2.94
111 TestFunctional/parallel/DockerEnv/bash 2.03
112 TestFunctional/parallel/Version/short 0.12
113 TestFunctional/parallel/Version/components 0.69
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.96
119 TestFunctional/parallel/ImageCommands/Setup 1.95
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.77
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.44
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.73
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.65
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.93
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.88
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.23
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.66
130 TestFunctional/delete_addon-resizer_images 0.17
131 TestFunctional/delete_my-image_image 0.07
132 TestFunctional/delete_minikube_cached_images 0.07
142 TestJSONOutput/start/Command 40.55
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.74
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.68
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 12.45
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.76
167 TestKicCustomNetwork/create_custom_network 26.85
168 TestKicCustomNetwork/use_default_bridge_network 26.68
169 TestKicExistingNetwork 28.66
170 TestKicCustomSubnet 28.14
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 57.72
175 TestMountStart/serial/StartWithMountFirst 7.03
176 TestMountStart/serial/VerifyMountFirst 0.44
177 TestMountStart/serial/StartWithMountSecond 7.42
178 TestMountStart/serial/VerifyMountSecond 0.45
179 TestMountStart/serial/DeleteFirst 2.4
180 TestMountStart/serial/VerifyMountPostDelete 0.43
181 TestMountStart/serial/Stop 1.61
182 TestMountStart/serial/RestartStopped 5.08
183 TestMountStart/serial/VerifyMountPostStop 0.43
186 TestMultiNode/serial/FreshStart2Nodes 71.72
187 TestMultiNode/serial/DeployApp2Nodes 6.07
188 TestMultiNode/serial/PingHostFrom2Pods 0.84
189 TestMultiNode/serial/AddNode 27
190 TestMultiNode/serial/ProfileList 0.52
191 TestMultiNode/serial/CopyFile 16.88
192 TestMultiNode/serial/StopNode 14.2
193 TestMultiNode/serial/StartAfterStop 25.3
194 TestMultiNode/serial/RestartKeepsNodes 119.18
195 TestMultiNode/serial/DeleteNode 18.98
196 TestMultiNode/serial/StopMultiNode 25.33
197 TestMultiNode/serial/RestartMultiNode 77.49
198 TestMultiNode/serial/ValidateNameConflict 29.38
204 TestScheduledStopUnix 98.46
205 TestSkaffold 57.4
207 TestInsufficientStorage 13.25
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 5.6
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 8.59
225 TestStoppedBinaryUpgrade/Setup 0.48
227 TestStoppedBinaryUpgrade/MinikubeLogs 3.81
229 TestPause/serial/Start 76.49
230 TestPause/serial/SecondStartNoReconfiguration 6.48
231 TestPause/serial/Pause 0.79
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.35
242 TestNoKubernetes/serial/StartWithK8s 26.13
243 TestNoKubernetes/serial/StartWithStopK8s 16.98
244 TestNoKubernetes/serial/Start 6.48
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.66
246 TestNoKubernetes/serial/ProfileList 1.08
247 TestNoKubernetes/serial/Stop 1.74
248 TestNoKubernetes/serial/StartNoArgs 4.42
249 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.5
250 TestNetworkPlugins/group/auto/Start 51.25
251 TestNetworkPlugins/group/kindnet/Start 47.59
252 TestNetworkPlugins/group/auto/KubeletFlags 0.5
253 TestNetworkPlugins/group/auto/NetCatPod 13.8
254 TestNetworkPlugins/group/auto/DNS 0.13
255 TestNetworkPlugins/group/auto/Localhost 0.1
256 TestNetworkPlugins/group/auto/HairPin 5.11
257 TestNetworkPlugins/group/cilium/Start 79.76
258 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
259 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
260 TestNetworkPlugins/group/kindnet/NetCatPod 12.77
261 TestNetworkPlugins/group/kindnet/DNS 0.16
262 TestNetworkPlugins/group/kindnet/Localhost 0.13
263 TestNetworkPlugins/group/kindnet/HairPin 0.14
264 TestNetworkPlugins/group/calico/Start 70.24
265 TestNetworkPlugins/group/cilium/ControllerPod 5.02
266 TestNetworkPlugins/group/cilium/KubeletFlags 0.49
267 TestNetworkPlugins/group/cilium/NetCatPod 12.43
268 TestNetworkPlugins/group/cilium/DNS 0.15
269 TestNetworkPlugins/group/cilium/Localhost 0.13
270 TestNetworkPlugins/group/cilium/HairPin 0.13
271 TestNetworkPlugins/group/false/Start 51.65
272 TestNetworkPlugins/group/calico/ControllerPod 5.02
273 TestNetworkPlugins/group/calico/KubeletFlags 0.49
274 TestNetworkPlugins/group/calico/NetCatPod 11.85
275 TestNetworkPlugins/group/calico/DNS 0.13
276 TestNetworkPlugins/group/calico/Localhost 0.12
277 TestNetworkPlugins/group/calico/HairPin 0.14
278 TestNetworkPlugins/group/bridge/Start 41.23
279 TestNetworkPlugins/group/false/KubeletFlags 0.51
280 TestNetworkPlugins/group/false/NetCatPod 11.65
281 TestNetworkPlugins/group/false/DNS 0.11
282 TestNetworkPlugins/group/false/Localhost 0.1
283 TestNetworkPlugins/group/false/HairPin 5.11
284 TestNetworkPlugins/group/enable-default-cni/Start 41.54
285 TestNetworkPlugins/group/bridge/KubeletFlags 0.51
286 TestNetworkPlugins/group/bridge/NetCatPod 13.08
287 TestNetworkPlugins/group/bridge/DNS 0.12
288 TestNetworkPlugins/group/bridge/Localhost 0.11
289 TestNetworkPlugins/group/bridge/HairPin 0.12
290 TestNetworkPlugins/group/kubenet/Start 52.16
291 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
292 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.3
293 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
298 TestNetworkPlugins/group/kubenet/KubeletFlags 0.46
299 TestNetworkPlugins/group/kubenet/NetCatPod 12.68
300 TestNetworkPlugins/group/kubenet/DNS 0.13
301 TestNetworkPlugins/group/kubenet/Localhost 0.1
302 TestNetworkPlugins/group/kubenet/HairPin 0.11
304 TestStartStop/group/embed-certs/serial/FirstStart 40.58
305 TestStartStop/group/embed-certs/serial/DeployApp 10.83
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
307 TestStartStop/group/embed-certs/serial/Stop 12.57
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
309 TestStartStop/group/embed-certs/serial/SecondStart 332.85
312 TestStartStop/group/old-k8s-version/serial/Stop 1.64
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.33
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.02
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.66
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.47
320 TestStartStop/group/no-preload/serial/FirstStart 51.49
321 TestStartStop/group/no-preload/serial/DeployApp 9.76
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
323 TestStartStop/group/no-preload/serial/Stop 12.56
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
325 TestStartStop/group/no-preload/serial/SecondStart 330.57
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.02
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.65
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.47
332 TestStartStop/group/default-k8s-different-port/serial/FirstStart 41.57
333 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.83
334 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.71
335 TestStartStop/group/default-k8s-different-port/serial/Stop 12.61
336 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.33
337 TestStartStop/group/default-k8s-different-port/serial/SecondStart 334.38
338 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 13.02
339 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.6
340 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.48
344 TestStartStop/group/newest-cni/serial/FirstStart 38.81
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
347 TestStartStop/group/newest-cni/serial/Stop 12.6
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
349 TestStartStop/group/newest-cni/serial/SecondStart 18.7
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.55
TestDownloadOnly/v1.16.0/json-events (13.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601031933-2342 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601031933-2342 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (13.247451943s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.25s)
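The json-events subtest above drives "minikube start -o=json --download-only" and consumes the machine-readable progress events printed on stdout. As an illustrative sketch only (not the actual aaa_download_only_test.go code), the following Go program runs the same command copied from the log above and decodes each stdout line as generic JSON; the binary path and profile name are taken from this run, and no particular event schema is assumed.

// Illustrative sketch, not the real test: run the download-only start shown above
// and decode each stdout line as generic JSON. Assumes the minikube binary lives
// at out/minikube-darwin-amd64, as in this report.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-o=json", "--download-only",
		"-p", "download-only-20220601031933-2342", "--force", "--alsologtostderr",
		"--kubernetes-version=v1.16.0", "--container-runtime=docker", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]interface{}
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println(ev) // each progress event as a generic map
		}
	}
	_ = cmd.Wait()
}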

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220601031933-2342
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220601031933-2342: exit status 85 (616.233732ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 03:19:33
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 03:19:33.415856    2354 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:19:33.416075    2354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:19:33.416080    2354 out.go:309] Setting ErrFile to fd 2...
	I0601 03:19:33.416084    2354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:19:33.416196    2354 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 03:19:33.416290    2354 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 03:19:33.416746    2354 out.go:303] Setting JSON to true
	I0601 03:19:33.432152    2354 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1143,"bootTime":1654077630,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 03:19:33.432263    2354 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 03:19:33.455482    2354 out.go:97] [download-only-20220601031933-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 03:19:33.455632    2354 notify.go:193] Checking for updates...
	W0601 03:19:33.455664    2354 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball: no such file or directory
	I0601 03:19:33.476754    2354 out.go:169] MINIKUBE_LOCATION=14079
	I0601 03:19:33.519952    2354 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 03:19:33.561914    2354 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 03:19:33.583202    2354 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 03:19:33.604907    2354 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	W0601 03:19:33.646845    2354 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 03:19:33.647171    2354 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 03:19:33.710944    2354 docker.go:113] docker version returned error: exit status 1
	I0601 03:19:33.731849    2354 out.go:97] Using the docker driver based on user configuration
	I0601 03:19:33.731870    2354 start.go:284] selected driver: docker
	I0601 03:19:33.731878    2354 start.go:806] validating driver "docker" against <nil>
	I0601 03:19:33.731969    2354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:19:33.858451    2354 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SB
OM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:19:33.879989    2354 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0601 03:19:33.901008    2354 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0601 03:19:33.942855    2354 out.go:169] 
	W0601 03:19:33.963959    2354 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0601 03:19:33.984685    2354 out.go:169] 
	I0601 03:19:34.026887    2354 out.go:169] 
	W0601 03:19:34.047625    2354 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0601 03:19:34.047737    2354 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0601 03:19:34.047772    2354 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 03:19:34.068820    2354 out.go:169] 
	I0601 03:19:34.090056    2354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:19:34.229070    2354 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SB
OM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0601 03:19:34.250712    2354 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0601 03:19:34.250782    2354 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 03:19:34.296842    2354 out.go:169] 
	W0601 03:19:34.317813    2354 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0601 03:19:34.317987    2354 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0601 03:19:34.318066    2354 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 03:19:34.338798    2354 out.go:169] 
	I0601 03:19:34.380656    2354 out.go:169] 
	W0601 03:19:34.401995    2354 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0601 03:19:34.422799    2354 out.go:169] 
	I0601 03:19:34.443621    2354 start_flags.go:373] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0601 03:19:34.443861    2354 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 03:19:34.464966    2354 out.go:169] Using Docker Desktop driver with the root privilege
	I0601 03:19:34.485812    2354 cni.go:95] Creating CNI manager for ""
	I0601 03:19:34.485855    2354 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:19:34.485866    2354 start_flags.go:306] config:
	{Name:download-only-20220601031933-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601031933-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:19:34.506621    2354 out.go:97] Starting control plane node download-only-20220601031933-2342 in cluster download-only-20220601031933-2342
	I0601 03:19:34.506664    2354 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 03:19:34.527808    2354 out.go:97] Pulling base image ...
	I0601 03:19:34.527839    2354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 03:19:34.527882    2354 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 03:19:34.528010    2354 cache.go:107] acquiring lock: {Name:mk6cdcb3277425415932624173a7b7ca3460ec43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.528046    2354 cache.go:107] acquiring lock: {Name:mk99bce95ec967b726e3ff0a90815665a40bf92c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.528076    2354 cache.go:107] acquiring lock: {Name:mk1fdebabd249df2c95ca9aee478c864fc9fedb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.529012    2354 cache.go:107] acquiring lock: {Name:mkd4ab2ddc2685773db76c838c20a9351e0307a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.529031    2354 cache.go:107] acquiring lock: {Name:mkf320d7f6611835c267f8cce3e7f02a47355ce7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.529048    2354 cache.go:107] acquiring lock: {Name:mke9682f7104cb996761b665029d07d9a54b9ec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.529093    2354 cache.go:107] acquiring lock: {Name:mkc5e356ac8cf3175bd703495d37f88a836dd612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.529090    2354 cache.go:107] acquiring lock: {Name:mk17e5d41685a9c9bfe40771fa97f28d234f06eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 03:19:34.530004    2354 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 03:19:34.529544    2354 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0601 03:19:34.529998    2354 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0601 03:19:34.529488    2354 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0601 03:19:34.530045    2354 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0601 03:19:34.530059    2354 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601031933-2342/config.json ...
	I0601 03:19:34.529491    2354 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0601 03:19:34.529868    2354 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0601 03:19:34.530094    2354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601031933-2342/config.json: {Name:mkd1e556742786a8191d7c9e1b719cf8f08c00b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 03:19:34.529981    2354 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0601 03:19:34.530460    2354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 03:19:34.530881    2354 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0601 03:19:34.530879    2354 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0601 03:19:34.530878    2354 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0601 03:19:34.536606    2354 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.538892    2354 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.539506    2354 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.539678    2354 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.540295    2354 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.540575    2354 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.541253    2354 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.541891    2354 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 03:19:34.600588    2354 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 03:19:34.600825    2354 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 03:19:34.600945    2354 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 03:19:35.084738    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0601 03:19:35.088259    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0601 03:19:35.090163    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0601 03:19:35.090660    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0601 03:19:35.110805    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0601 03:19:35.112238    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0601 03:19:35.136200    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0601 03:19:35.181729    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0601 03:19:35.181748    2354 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 652.888623ms
	I0601 03:19:35.181758    2354 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0601 03:19:35.240757    2354 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0601 03:19:35.493794    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0601 03:19:35.493811    2354 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 965.807432ms
	I0601 03:19:35.493821    2354 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0601 03:19:35.673532    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0601 03:19:35.673548    2354 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 1.144694573s
	I0601 03:19:35.673557    2354 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0601 03:19:35.704806    2354 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0601 03:19:36.012665    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0601 03:19:36.012683    2354 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 1.484601597s
	I0601 03:19:36.012692    2354 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0601 03:19:36.105292    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0601 03:19:36.105330    2354 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 1.577295031s
	I0601 03:19:36.105338    2354 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0601 03:19:36.256777    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0601 03:19:36.256798    2354 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 1.728777666s
	I0601 03:19:36.256808    2354 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0601 03:19:36.298584    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0601 03:19:36.298613    2354 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 1.769598403s
	I0601 03:19:36.298631    2354 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0601 03:19:36.615278    2354 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0601 03:19:36.615294    2354 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 2.087195287s
	I0601 03:19:36.615302    2354 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0601 03:19:36.615315    2354 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601031933-2342"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.62s)
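LogsDuration deliberately tolerates the non-zero exit (status 85) shown above, since no control-plane node exists after a download-only start; the subtest only bounds how long "minikube logs" takes. A minimal, hypothetical Go sketch of that run-and-time pattern, using the binary path and profile name from this run (not the actual aaa_download_only_test.go implementation), could look like this.

// Hypothetical sketch of the timing-and-tolerate-failure pattern seen above;
// not minikube's actual test code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "download-only-20220601031933-2342" // profile from the run above
	start := time.Now()
	out, err := exec.Command("out/minikube-darwin-amd64", "logs", "-p", profile).CombinedOutput()
	elapsed := time.Since(start)
	// With no cluster created, a non-zero exit (status 85 in the log above) is expected;
	// the interesting signal is simply how long the command took.
	fmt.Printf("elapsed=%v err=%v out=%d bytes\n", elapsed, err, len(out))
}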

                                                
                                    
TestDownloadOnly/v1.23.6/json-events (5.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601031933-2342 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601031933-2342 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker : (5.758326958s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (5.76s)

                                                
                                    
TestDownloadOnly/v1.23.6/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220601031933-2342
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220601031933-2342: exit status 85 (289.387098ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 03:19:47
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 03:19:47.519353    2412 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:19:47.519578    2412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:19:47.519583    2412 out.go:309] Setting ErrFile to fd 2...
	I0601 03:19:47.519588    2412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:19:47.519741    2412 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 03:19:47.519855    2412 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 03:19:47.520035    2412 out.go:303] Setting JSON to true
	I0601 03:19:47.538596    2412 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1157,"bootTime":1654077630,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 03:19:47.538737    2412 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 03:19:47.560543    2412 out.go:97] [download-only-20220601031933-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 03:19:47.560661    2412 notify.go:193] Checking for updates...
	W0601 03:19:47.560668    2412 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball: no such file or directory
	I0601 03:19:47.581394    2412 out.go:169] MINIKUBE_LOCATION=14079
	I0601 03:19:47.655464    2412 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 03:19:47.730879    2412 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 03:19:47.773429    2412 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 03:19:47.830488    2412 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	W0601 03:19:47.904599    2412 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 03:19:47.905228    2412 config.go:178] Loaded profile config "download-only-20220601031933-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0601 03:19:47.905316    2412 start.go:714] api.Load failed for download-only-20220601031933-2342: filestore "download-only-20220601031933-2342": Docker machine "download-only-20220601031933-2342" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 03:19:47.905386    2412 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 03:19:47.905416    2412 start.go:714] api.Load failed for download-only-20220601031933-2342: filestore "download-only-20220601031933-2342": Docker machine "download-only-20220601031933-2342" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0601 03:19:47.992501    2412 docker.go:113] docker version returned error: exit status 1
	I0601 03:19:48.031759    2412 out.go:97] Using the docker driver based on existing profile
	I0601 03:19:48.031796    2412 start.go:284] selected driver: docker
	I0601 03:19:48.031839    2412 start.go:806] validating driver "docker" against &{Name:download-only-20220601031933-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601031933-2342 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:19:48.032240    2412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:19:48.170878    2412 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SB
OM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:19:48.209305    2412 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0601 03:19:48.267437    2412 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0601 03:19:48.363401    2412 out.go:169] 
	W0601 03:19:48.400810    2412 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0601 03:19:48.438596    2412 out.go:169] 
	I0601 03:19:48.551436    2412 out.go:169] 
	W0601 03:19:48.572406    2412 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0601 03:19:48.572503    2412 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0601 03:19:48.572554    2412 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 03:19:48.609330    2412 out.go:169] 
	I0601 03:19:48.647162    2412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:19:48.828122    2412 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:43 SystemTime:2022-06-01 10:19:48.757084379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:19:48.830203    2412 cni.go:95] Creating CNI manager for ""
	I0601 03:19:48.830220    2412 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 03:19:48.830235    2412 start_flags.go:306] config:
	{Name:download-only-20220601031933-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220601031933-2342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:19:48.854470    2412 out.go:97] Starting control plane node download-only-20220601031933-2342 in cluster download-only-20220601031933-2342
	I0601 03:19:48.854574    2412 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 03:19:48.876583    2412 out.go:97] Pulling base image ...
	I0601 03:19:48.876685    2412 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 03:19:48.876774    2412 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 03:19:48.942075    2412 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 03:19:48.942225    2412 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 03:19:48.942241    2412 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory, skipping pull
	I0601 03:19:48.942245    2412 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in cache, skipping pull
	I0601 03:19:48.942252    2412 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a as a tarball
	I0601 03:19:48.949585    2412 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 03:19:48.949598    2412 cache.go:57] Caching tarball of preloaded images
	I0601 03:19:48.949780    2412 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 03:19:48.971414    2412 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0601 03:19:48.971444    2412 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:19:49.065454    2412 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 03:19:51.916449    2412 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0601 03:19:51.916617    2412 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601031933-2342"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.29s)
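Note: the preload download above fetches the tarball with a "?checksum=md5:..." query parameter and then saves and verifies that checksum locally. A minimal sketch of the verification step, assuming a hypothetical local file path and using the digest shown in the log (minikube's real logic lives in its preload/download packages and may differ):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 computes the MD5 digest of the file at path and compares it
// against the expected hex string, mirroring the "verifying checksum" step
// in the log above. Hypothetical helper, not minikube's actual code.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Placeholder path; the expected digest is the one in the download URL above.
	tarball := "preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4"
	if err := verifyMD5(tarball, "a6c3f222f3cce2a88e27e126d64eb717"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum OK")
}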

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.77s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.77s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220601031933-2342
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

                                                
                                    
x
+
TestDownloadOnlyKic (4.33s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220601031955-2342 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220601031955-2342 --force --alsologtostderr --driver=docker : (3.174428818s)
helpers_test.go:175: Cleaning up "download-docker-20220601031955-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220601031955-2342
--- PASS: TestDownloadOnlyKic (4.33s)
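Note: each "(dbg) Run:" / "(dbg) Done:" pair above corresponds to the test harness shelling out to the minikube binary and recording how long the command took. A rough sketch of that pattern with a hypothetical runTimed helper (the harness's real helpers in helpers_test.go are more elaborate):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runTimed executes a command, echoes Run/Done markers like the report
// above, and returns the combined output. Illustrative only.
func runTimed(name string, args ...string) ([]byte, error) {
	cmd := exec.Command(name, args...)
	fmt.Printf("(dbg) Run:  %s\n", cmd.String())
	start := time.Now()
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Printf("(dbg) Done: %s: (%s)\n", cmd.String(), time.Since(start))
	}
	return out, err
}

func main() {
	// Placeholder command; the tests above invoke out/minikube-darwin-amd64 instead.
	if _, err := runTimed("echo", "hello"); err != nil {
		fmt.Println("command failed:", err)
	}
}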

                                                
                                    
x
+
TestBinaryMirror (1.67s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220601031959-2342 --alsologtostderr --binary-mirror http://127.0.0.1:49608 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220601031959-2342 --alsologtostderr --binary-mirror http://127.0.0.1:49608 --driver=docker : (1.006648679s)
helpers_test.go:175: Cleaning up "binary-mirror-20220601031959-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220601031959-2342
--- PASS: TestBinaryMirror (1.67s)

                                                
                                    
x
+
TestOffline (46.83s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220601035306-2342 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220601035306-2342 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (43.695059289s)
helpers_test.go:175: Cleaning up "offline-docker-20220601035306-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220601035306-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220601035306-2342: (3.137614648s)
--- PASS: TestOffline (46.83s)

                                                
                                    
x
+
TestAddons/Setup (107.85s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220601032001-2342 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220601032001-2342 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m47.853395436s)
--- PASS: TestAddons/Setup (107.85s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 2.834281ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-kqkb9" [a1e7ea48-6fd4-412d-b265-cea40c260400] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009159417s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220601032001-2342 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601032001-2342 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)
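Note: the "waiting 6m0s for pods matching \"k8s-app=metrics-server\"" lines come from a helper that repeatedly lists pods by label until they report Running. A simplified kubectl-based sketch of that wait loop; the label, namespace, and timeout mirror the log, but the helper itself is hypothetical and needs a reachable cluster:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodsRunning polls kubectl until every pod matching the label
// selector reports phase Running, or the timeout expires.
func waitForPodsRunning(namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %s", selector, namespace, timeout)
}

func main() {
	if err := waitForPodsRunning("kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}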

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.24s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 13.960472ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-6d67d5465d-bbwht" [c0406e37-2230-4a28-a614-39a5fb6a1b47] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01218253s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220601032001-2342 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220601032001-2342 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.699434076s)
addons_test.go:440: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601032001-2342 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.24s)

                                                
                                    
x
+
TestAddons/parallel/CSI (44.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 5.186336ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220601032001-2342 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601032001-2342 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220601032001-2342 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [77c0824f-5c0c-4dc2-aa41-741fc7bec41d] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [77c0824f-5c0c-4dc2-aa41-741fc7bec41d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [77c0824f-5c0c-4dc2-aa41-741fc7bec41d] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.011717713s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220601032001-2342 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601032001-2342 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601032001-2342 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220601032001-2342 delete pod task-pv-pod
addons_test.go:544: (dbg) Done: kubectl --context addons-20220601032001-2342 delete pod task-pv-pod: (1.129044417s)
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220601032001-2342 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220601032001-2342 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601032001-2342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220601032001-2342 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [cdaf76f2-5fb9-4c9c-b2fb-9d3a0d844ea7] Pending
helpers_test.go:342: "task-pv-pod-restore" [cdaf76f2-5fb9-4c9c-b2fb-9d3a0d844ea7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [cdaf76f2-5fb9-4c9c-b2fb-9d3a0d844ea7] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.010042285s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220601032001-2342 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220601032001-2342 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220601032001-2342 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601032001-2342 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220601032001-2342 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.849791323s)
addons_test.go:592: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601032001-2342 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.94s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth (15.36s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220601032001-2342 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [99614faf-d8c6-473c-a732-219d02257ce9] Pending
helpers_test.go:342: "busybox" [99614faf-d8c6-473c-a732-219d02257ce9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [99614faf-d8c6-473c-a732-219d02257ce9] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.007776265s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220601032001-2342 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220601032001-2342 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220601032001-2342 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601032001-2342 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220601032001-2342 addons disable gcp-auth --alsologtostderr -v=1: (5.861981134s)
--- PASS: TestAddons/serial/GCPAuth (15.36s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (13.04s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220601032001-2342
addons_test.go:132: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220601032001-2342: (12.654461384s)
addons_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220601032001-2342
addons_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220601032001-2342
--- PASS: TestAddons/StoppedEnableDisable (13.04s)

                                                
                                    
x
+
TestCertOptions (30.69s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220601035748-2342 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0601 03:57:50.859320    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220601035748-2342 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (26.813218366s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220601035748-2342 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220601035748-2342 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220601035748-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220601035748-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220601035748-2342: (2.835995405s)
--- PASS: TestCertOptions (30.69s)
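Note: TestCertOptions checks (via "openssl x509 -text -noout") that the apiserver certificate contains the extra IPs and names passed on the command line. A small sketch of the same check done with Go's crypto/x509 against a local PEM file; the file path is a placeholder, and the expected values are the flags from the log above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	// Placeholder path; the test reads /var/lib/minikube/certs/apiserver.crt over SSH.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Values taken from the --apiserver-ips / --apiserver-names flags above.
	wantIP := net.ParseIP("192.168.15.15")
	foundIP := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			foundIP = true
		}
	}
	foundName := false
	for _, name := range cert.DNSNames {
		if name == "www.google.com" {
			foundName = true
		}
	}
	fmt.Printf("has 192.168.15.15: %v, has www.google.com: %v\n", foundIP, foundName)
}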

                                                
                                    
x
+
TestCertExpiration (215.42s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220601035425-2342 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220601035425-2342 --memory=2048 --cert-expiration=3m --driver=docker : (25.729742501s)
E0601 03:55:49.611464    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 03:56:49.047619    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:57:40.615442    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:40.620890    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:40.633048    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:40.653606    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:40.694124    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:40.774781    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:40.934953    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:41.255166    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:41.895362    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 03:57:43.175578    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220601035425-2342 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220601035425-2342 --memory=2048 --cert-expiration=8760h --driver=docker : (6.54957681s)
helpers_test.go:175: Cleaning up "cert-expiration-20220601035425-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220601035425-2342
E0601 03:58:01.101385    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220601035425-2342: (3.14020376s)
--- PASS: TestCertExpiration (215.42s)
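Note: TestCertExpiration first provisions the cluster with --cert-expiration=3m, waits for the certificates to lapse, then restarts with --cert-expiration=8760h and expects them to be regenerated. A sketch of inspecting a certificate's remaining lifetime, assuming a placeholder local PEM file rather than the cluster's real certs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; the test inspects the cluster's generated certificates.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	remaining := time.Until(cert.NotAfter)
	// With --cert-expiration=3m the remaining lifetime starts under three minutes;
	// after the second start with --cert-expiration=8760h it should be about a year.
	fmt.Printf("certificate expires in %s (expired: %v)\n", remaining.Round(time.Second), remaining <= 0)
}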

                                                
                                    
x
+
TestDockerFlags (27.18s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220601035358-2342 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220601035358-2342 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (23.495383266s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220601035358-2342 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220601035358-2342 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220601035358-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220601035358-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220601035358-2342: (2.76742087s)
--- PASS: TestDockerFlags (27.18s)
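Note: TestDockerFlags passes --docker-env=FOO=BAR and --docker-env=BAZ=BAT, then reads "systemctl show docker --property=Environment" to confirm the daemon picked them up. A self-contained sketch of parsing that property line; the sample line is illustrative, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

// hasEnv reports whether a `systemctl show --property=Environment` line
// contains the given KEY=VALUE assignment.
func hasEnv(propertyLine, want string) bool {
	// The property line looks like: Environment=FOO=BAR BAZ=BAT ...
	value := strings.TrimPrefix(propertyLine, "Environment=")
	for _, kv := range strings.Fields(value) {
		if kv == want {
			return true
		}
	}
	return false
}

func main() {
	// Example output shape only; the real line comes from the docker unit inside the node.
	line := "Environment=FOO=BAR BAZ=BAT NO_PROXY=localhost"
	fmt.Println(hasEnv(line, "FOO=BAR"), hasEnv(line, "BAZ=BAT"))
}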

                                                
                                    
x
+
TestForceSystemdFlag (234.75s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220601035353-2342 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220601035353-2342 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (3m51.30625186s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220601035353-2342 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220601035353-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220601035353-2342
E0601 03:57:45.738172    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220601035353-2342: (2.914474623s)
--- PASS: TestForceSystemdFlag (234.75s)

                                                
                                    
x
+
TestForceSystemdEnv (30.67s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220601035327-2342 --memory=2048 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220601035327-2342 --memory=2048 --alsologtostderr -v=5 --driver=docker : (26.551574061s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220601035327-2342 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220601035327-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220601035327-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220601035327-2342: (3.432494029s)
--- PASS: TestForceSystemdEnv (30.67s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (5.57s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (5.57s)

                                                
                                    
x
+
TestErrorSpam/setup (25.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220601032324-2342 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220601032324-2342 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 --driver=docker : (25.119301534s)
--- PASS: TestErrorSpam/setup (25.12s)

                                                
                                    
x
+
TestErrorSpam/start (2.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 start --dry-run
--- PASS: TestErrorSpam/start (2.37s)

                                                
                                    
x
+
TestErrorSpam/status (1.35s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 status
--- PASS: TestErrorSpam/status (1.35s)

                                                
                                    
x
+
TestErrorSpam/pause (1.91s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 pause
--- PASS: TestErrorSpam/pause (1.91s)

                                                
                                    
x
+
TestErrorSpam/unpause (2.01s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 unpause
--- PASS: TestErrorSpam/unpause (2.01s)

                                                
                                    
x
+
TestErrorSpam/stop (13.21s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 stop: (12.532308861s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601032324-2342 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601032324-2342 stop
--- PASS: TestErrorSpam/stop (13.21s)
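Note: the TestErrorSpam group runs ordinary commands (start --dry-run, status, pause, unpause, stop) and fails if their output contains unexpected warning or error text. A minimal sketch of that kind of scan over captured output, using an illustrative pattern list rather than the suite's real one:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// spamLines returns output lines that match any of the disallowed patterns.
func spamLines(output string, patterns []*regexp.Regexp) []string {
	var hits []string
	for _, line := range strings.Split(output, "\n") {
		for _, re := range patterns {
			if re.MatchString(line) {
				hits = append(hits, line)
				break
			}
		}
	}
	return hits
}

func main() {
	patterns := []*regexp.Regexp{
		regexp.MustCompile(`(?i)\berror\b`),
		regexp.MustCompile(`(?i)\bwarning\b`),
	}
	// Illustrative sample output, not taken from this run.
	sample := "Pausing node nospam-20220601032324-2342 ...\nWARNING: something unexpected\nDone."
	for _, l := range spamLines(sample, patterns) {
		fmt.Println("unexpected output:", l)
	}
}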

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/test/nested/copy/2342/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (42.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (42.019978887s)
--- PASS: TestFunctional/serial/StartWithProxy (42.02s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.39s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --alsologtostderr -v=8: (6.391000202s)
functional_test.go:655: soft start took 6.391638231s for "functional-20220601032413-2342" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.39s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220601032413-2342 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220601032413-2342 get po -A: (1.473945072s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.47s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add k8s.gcr.io/pause:3.1: (1.080350733s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add k8s.gcr.io/pause:3.3: (1.540589101s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add k8s.gcr.io/pause:latest: (1.434262763s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220601032413-2342 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local418077054/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add minikube-local-cache-test:functional-20220601032413-2342
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache add minikube-local-cache-test:functional-20220601032413-2342: (1.320349625s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache delete minikube-local-cache-test:functional-20220601032413-2342
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220601032413-2342
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (435.684515ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 cache reload: (1.002440582s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.37s)
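Note: in cache_reload the first "crictl inspecti" is expected to fail (the image was just removed), and the harness records the non-zero exit before running "cache reload". A sketch of distinguishing an expected exit code from other failures, using a placeholder command instead of ssh + crictl:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runExpectFailure runs the command and returns its exit code, treating a
// non-zero exit as an ordinary outcome rather than an error.
func runExpectFailure(name string, args ...string) (int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err == nil {
		return 0, fmt.Errorf("expected command to fail, output: %s", out)
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil
	}
	return 0, err // e.g. binary not found at all
}

func main() {
	// "false" exits 1; the test above runs `ssh sudo crictl inspecti ...` instead.
	code, err := runExpectFailure("false")
	fmt.Println(code, err)
}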

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 kubectl -- --context functional-20220601032413-2342 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220601032413-2342 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (29.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.270455906s)
functional_test.go:753: restart took 29.270622336s for "functional-20220601032413-2342" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.27s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220601032413-2342 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
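Note: ComponentHealth fetches the tier=control-plane pods as JSON and checks each one's phase and Ready condition, which is what the "phase: Running" / "status: Ready" lines above reflect. A stripped-down sketch of that decode-and-check step over an inline sample document; the field names follow the Kubernetes pod schema, and the sample data is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Normally this would be the output of:
	//   kubectl get po -l tier=control-plane -n kube-system -o=json
	sample := `{"items":[{"metadata":{"labels":{"component":"etcd"}},
	  "status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`

	var pods podList
	if err := json.Unmarshal([]byte(sample), &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s phase: %s, ready: %v\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}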

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (3.24s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 logs: (3.23774804s)
--- PASS: TestFunctional/serial/LogsCmd (3.24s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (3.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3817366938/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3817366938/001/logs.txt: (3.360135754s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.36s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 config get cpus: exit status 14 (56.339844ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 config get cpus: exit status 14 (53.533414ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220601032413-2342 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220601032413-2342 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 3987: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.91s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (732.296931ms)

                                                
                                                
-- stdout --
	* [functional-20220601032413-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:26:26.387143    3891 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:26:26.387394    3891 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:26:26.387401    3891 out.go:309] Setting ErrFile to fd 2...
	I0601 03:26:26.387409    3891 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:26:26.387538    3891 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:26:26.387878    3891 out.go:303] Setting JSON to false
	I0601 03:26:26.405453    3891 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1556,"bootTime":1654077630,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 03:26:26.405601    3891 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 03:26:26.427553    3891 out.go:177] * [functional-20220601032413-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 03:26:26.469427    3891 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 03:26:26.511382    3891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 03:26:26.553305    3891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 03:26:26.595507    3891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 03:26:26.638444    3891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 03:26:26.659850    3891 config.go:178] Loaded profile config "functional-20220601032413-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 03:26:26.660482    3891 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 03:26:26.739573    3891 docker.go:137] docker version: linux-20.10.14
	I0601 03:26:26.739718    3891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:26:26.871652    3891 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 10:26:26.804889868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:26:26.914524    3891 out.go:177] * Using the docker driver based on existing profile
	I0601 03:26:26.951403    3891 start.go:284] selected driver: docker
	I0601 03:26:26.951434    3891 start.go:806] validating driver "docker" against &{Name:functional-20220601032413-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601032413-2342 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:26:26.951581    3891 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 03:26:26.975484    3891 out.go:177] 
	W0601 03:26:26.996656    3891 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0601 03:26:27.018279    3891 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.51s)
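
Note: the DryRun block above exercises minikube's flag validation only, not a real start; exit status 23 with RSRC_INSUFFICIENT_REQ_MEMORY is the expected outcome when 250MB is requested against the 1800MB minimum. A rough manual reproduction, assuming a locally installed minikube binary and the existing docker-driver profile from this report:

# --dry-run validates the requested configuration without touching the cluster.
minikube start -p functional-20220601032413-2342 --dry-run --memory 250MB --driver=docker
echo "exit status: $?"    # expected: 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as captured above

# Dropping the undersized --memory request lets validation pass.
minikube start -p functional-20220601032413-2342 --dry-run --driver=docker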

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220601032413-2342 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (630.333234ms)

                                                
                                                
-- stdout --
	* [functional-20220601032413-2342] minikube v1.26.0-beta.1 sur Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:26:25.746420    3872 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:26:25.746586    3872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:26:25.746591    3872 out.go:309] Setting ErrFile to fd 2...
	I0601 03:26:25.746594    3872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:26:25.746702    3872 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:26:25.746918    3872 out.go:303] Setting JSON to false
	I0601 03:26:25.763329    3872 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1555,"bootTime":1654077630,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 03:26:25.763433    3872 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 03:26:25.788345    3872 out.go:177] * [functional-20220601032413-2342] minikube v1.26.0-beta.1 sur Darwin 12.4
	I0601 03:26:25.830490    3872 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 03:26:25.852501    3872 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 03:26:25.874188    3872 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 03:26:25.895411    3872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 03:26:25.917271    3872 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 03:26:25.938520    3872 config.go:178] Loaded profile config "functional-20220601032413-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 03:26:25.939276    3872 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 03:26:26.013533    3872 docker.go:137] docker version: linux-20.10.14
	I0601 03:26:26.013663    3872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 03:26:26.154045    3872 info.go:265] docker info: {ID:OSPM:CVZQ:HET7:TQWM:FDTX:UXYA:EVZD:IXWP:AUUF:VCIR:LE27:YQJQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 10:26:26.086339579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 03:26:26.196919    3872 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0601 03:26:26.218858    3872 start.go:284] selected driver: docker
	I0601 03:26:26.218880    3872 start.go:806] validating driver "docker" against &{Name:functional-20220601032413-2342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601032413-2342 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 03:26:26.219073    3872 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 03:26:26.243722    3872 out.go:177] 
	W0601 03:26:26.264975    3872 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0601 03:26:26.285523    3872 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.63s)
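
Note: the French stdout/stderr above is the point of this test; the same RSRC_INSUFFICIENT_REQ_MEMORY validation failure is produced through the localized UI. A minimal sketch, assuming minikube picks the locale up from LC_ALL (which is what the localized output in this block suggests):

# Same dry-run as in the DryRun block, but with a French locale in the environment.
LC_ALL=fr minikube start -p functional-20220601032413-2342 --dry-run --memory 250MB --driver=docker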

                                                
                                    
TestFunctional/parallel/StatusCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 status
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.38s)
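
Note: StatusCmd covers the three status output modes. Roughly equivalent calls against a running profile, assuming an installed minikube binary (the labels in the template are arbitrary strings, only the {{...}} fields matter):

minikube -p functional-20220601032413-2342 status
# Go-template output with the same fields used by the test above.
minikube -p functional-20220601032413-2342 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-20220601032413-2342 status -o json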

                                                
                                    
TestFunctional/parallel/ServiceCmd (13.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220601032413-2342 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220601032413-2342 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-z7vd2" [3c0d8976-2f13-4393-ac93-a8d221ca9424] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-z7vd2" [3c0d8976-2f13-4393-ac93-a8d221ca9424] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.008187797s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 service --namespace=default --https --url hello-node: (2.029380452s)
functional_test.go:1475: found endpoint: https://127.0.0.1:51838
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 service hello-node --url --format={{.IP}}: (2.026626239s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 service hello-node --url: (2.025743596s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:51967
--- PASS: TestFunctional/parallel/ServiceCmd (13.89s)
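
Note: the ServiceCmd sequence above is create deployment, expose it as a NodePort service, wait for the pod, then resolve a reachable URL through minikube. Condensed by hand (image and names copied from the log; the wait step is an illustrative stand-in for the test's pod polling):

kubectl --context functional-20220601032413-2342 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
kubectl --context functional-20220601032413-2342 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-20220601032413-2342 wait --for=condition=available deployment/hello-node --timeout=600s

# List the profile's services, then resolve the URL minikube exposes for hello-node.
minikube -p functional-20220601032413-2342 service list
minikube -p functional-20220601032413-2342 service hello-node --url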

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [5701502b-e874-4487-ad0b-449fa1df3411] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009337903s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220601032413-2342 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220601032413-2342 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220601032413-2342 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601032413-2342 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [ad63f32d-e254-47a8-8f25-14c71818382b] Pending
helpers_test.go:342: "sp-pod" [ad63f32d-e254-47a8-8f25-14c71818382b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [ad63f32d-e254-47a8-8f25-14c71818382b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.009999757s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220601032413-2342 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220601032413-2342 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601032413-2342 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [c185aea0-d252-48c3-a744-80475036a9ca] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [c185aea0-d252-48c3-a744-80475036a9ca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [c185aea0-d252-48c3-a744-80475036a9ca] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008923016s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220601032413-2342 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.34s)
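
Note: PersistentVolumeClaim drives the default storage-provisioner addon end to end: claim storage, mount it in a pod, write a file, delete and recreate the pod, and check the file survived. The real manifests live under testdata/storage-provisioner in the minikube repo; the sketch below is an illustrative stand-in, not a copy:

kubectl --context functional-20220601032413-2342 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

kubectl --context functional-20220601032413-2342 wait --for=condition=ready pod/sp-pod --timeout=180s
kubectl --context functional-20220601032413-2342 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-20220601032413-2342 delete pod sp-pod
# Re-apply the pod manifest, wait again, then confirm the file persisted:
#   kubectl --context functional-20220601032413-2342 exec sp-pod -- ls /tmp/mount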

                                                
                                    
TestFunctional/parallel/SSHCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.11s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh -n functional-20220601032413-2342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 cp functional-20220601032413-2342:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd881843138/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh -n functional-20220601032413-2342 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)
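
Note: CpCmd is a round trip of minikube cp, host to node and node back to host, with an SSH cat in between to check contents. By hand (local paths are illustrative):

# host -> node
minikube -p functional-20220601032413-2342 cp ./cp-test.txt /home/docker/cp-test.txt
minikube -p functional-20220601032413-2342 ssh -n functional-20220601032413-2342 "sudo cat /home/docker/cp-test.txt"

# node -> host, using the <node>:<path> source syntax seen in the log
minikube -p functional-20220601032413-2342 cp functional-20220601032413-2342:/home/docker/cp-test.txt ./cp-test-copy.txt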

                                                
                                    
TestFunctional/parallel/MySQL (20.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220601032413-2342 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-b87c45988-dh9rb" [0fa6ca60-da49-44bb-9ae8-41a85c233c5c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-dh9rb" [0fa6ca60-da49-44bb-9ae8-41a85c233c5c] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.010206739s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601032413-2342 exec mysql-b87c45988-dh9rb -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601032413-2342 exec mysql-b87c45988-dh9rb -- mysql -ppassword -e "show databases;": exit status 1 (117.084099ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601032413-2342 exec mysql-b87c45988-dh9rb -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601032413-2342 exec mysql-b87c45988-dh9rb -- mysql -ppassword -e "show databases;": exit status 1 (110.306593ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601032413-2342 exec mysql-b87c45988-dh9rb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.33s)
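
Note: the two non-zero exits above are expected churn rather than failures; the pod reports Running before mysqld has finished initializing, so the first "show databases" attempts hit ERROR 2002 and the test simply retries until one succeeds. A small retry loop in the same spirit, assuming the mysql deployment from testdata/mysql.yaml is already applied:

POD=$(kubectl --context functional-20220601032413-2342 get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
# Keep retrying until mysqld is actually accepting connections on its socket.
until kubectl --context functional-20220601032413-2342 exec "$POD" -- mysql -ppassword -e "show databases;"; do
  sleep 2
done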

                                                
                                    
TestFunctional/parallel/FileSync (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/2342/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo cat /etc/test/nested/copy/2342/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.58s)
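
Note: FileSync checks minikube's file sync feature: files placed under the minikube home's files/ tree are copied into the node at the matching absolute path on start. The 2342 in the path is this run's test PID. A sketch assuming the default MINIKUBE_HOME of ~/.minikube:

# Anything below $HOME/.minikube/files/ is synced into the node at the same path.
mkdir -p "$HOME/.minikube/files/etc/test/nested/copy/2342"
echo "Test file for checking file sync process" > "$HOME/.minikube/files/etc/test/nested/copy/2342/hosts"
minikube start -p functional-20220601032413-2342 --driver=docker
minikube -p functional-20220601032413-2342 ssh "sudo cat /etc/test/nested/copy/2342/hosts"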

                                                
                                    
TestFunctional/parallel/CertSync (3.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/2342.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo cat /etc/ssl/certs/2342.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/2342.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo cat /usr/share/ca-certificates/2342.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/23422.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo cat /etc/ssl/certs/23422.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/23422.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo cat /usr/share/ca-certificates/23422.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.01s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220601032413-2342 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo systemctl is-active crio"
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo systemctl is-active crio": exit status 1 (461.915328ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220601032413-2342 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220601032413-2342 apply -f testdata/testsvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [2ac24167-b34f-45f7-9618-3177c03caf04] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [2ac24167-b34f-45f7-9618-3177c03caf04] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.016138225s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601032413-2342 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220601032413-2342 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 3659: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
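
Note: the TunnelCmd serial group starts minikube tunnel, deploys the nginx-svc pod and LoadBalancer service from testdata/testsvc.yaml, waits for an ingress IP, confirms the endpoint answers (http://127.0.0.1 on this docker-on-macOS run), then stops the tunnel. A condensed manual pass, assuming the same manifest; tunnel may ask for elevated privileges to add routes:

# Keep the tunnel running in the background so LoadBalancer services get an ingress IP.
minikube -p functional-20220601032413-2342 tunnel &
TUNNEL_PID=$!

kubectl --context functional-20220601032413-2342 apply -f testdata/testsvc.yaml
kubectl --context functional-20220601032413-2342 wait --for=condition=ready pod -l run=nginx-svc --timeout=240s

IP=$(kubectl --context functional-20220601032413-2342 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "ingress IP: $IP"
curl -sI "http://$IP"

kill "$TUNNEL_PID"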

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "452.854523ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "77.732267ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1361: Took "515.240718ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1374: Took "118.839373ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.63s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220601032413-2342 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port307530758/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654079178342130000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port307530758/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654079178342130000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port307530758/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654079178342130000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port307530758/001/test-1654079178342130000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (478.049756ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh -- ls -la /mount-9p

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  1 10:26 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  1 10:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  1 10:26 test-1654079178342130000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh cat /mount-9p/test-1654079178342130000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220601032413-2342 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [3f3acdb5-0e52-4b4f-9929-13c4c9b5d568] Pending
helpers_test.go:342: "busybox-mount" [3f3acdb5-0e52-4b4f-9929-13c4c9b5d568] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [3f3acdb5-0e52-4b4f-9929-13c4c9b5d568] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [3f3acdb5-0e52-4b4f-9929-13c4c9b5d568] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007395364s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220601032413-2342 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh stat /mount-9p/created-by-test

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh stat /mount-9p/created-by-pod

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220601032413-2342 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port307530758/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.83s)
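
Note: MountCmd/any-port keeps a 9p mount from a host temp directory alive at /mount-9p in the node, checks it with findmnt and ls, runs the busybox-mount pod against it, then unmounts. By hand (the host directory is illustrative):

# Leave the mount process running in the background for the duration of the checks.
mkdir -p "$PWD/mount-src"
minikube mount -p functional-20220601032413-2342 "$PWD/mount-src:/mount-9p" &
MOUNT_PID=$!

minikube -p functional-20220601032413-2342 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-20220601032413-2342 ssh -- ls -la /mount-9p

kill "$MOUNT_PID"
# The specific-port variant below is the same flow with --port 46464 pinned on the mount command.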

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220601032413-2342 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3945068516/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (551.17551ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220601032413-2342 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3945068516/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh "sudo umount -f /mount-9p": exit status 1 (419.778027ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220601032413-2342 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3945068516/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.94s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220601032413-2342 docker-env) && out/minikube-darwin-amd64 status -p functional-20220601032413-2342"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220601032413-2342 docker-env) && out/minikube-darwin-amd64 status -p functional-20220601032413-2342": (1.236023416s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220601032413-2342 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.03s)
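
Note: DockerEnv/bash points the host docker CLI at the dockerd running inside the node by eval'ing minikube docker-env, then lists images through that daemon; this is the same flow used to build images directly into the cluster:

# Export DOCKER_HOST and the related variables into this shell.
eval "$(minikube -p functional-20220601032413-2342 docker-env)"
docker images

# Point the CLI back at the host daemon when finished.
eval "$(minikube -p functional-20220601032413-2342 docker-env --unset)"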

                                                
                                    
TestFunctional/parallel/Version/short (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

                                                
                                    
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)
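The two Version subtests above are the short and the component-level form of the same command; a minimal sketch with <profile> as a placeholder:
    # only the minikube version string
    minikube -p <profile> version --short
    # per-component version details rendered as JSON
    minikube -p <profile> version -o=json --components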

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format short
E0601 03:26:59.256019    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220601032413-2342
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-20220601032413-2342 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-20220601032413-2342 | 52345adbb6da9 | 30B    |
| docker.io/library/mysql                     | 5.7                            | 2a0961b7de03c | 462MB  |
| docker.io/library/nginx                     | alpine                         | b1c3acb288825 | 23.4MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | 595f327f224a4 | 53.5MB |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | 4c03754524064 | 112MB  |
| docker.io/kubernetesui/dashboard            | <none>                         | 7fff914c4a615 | 243MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest                         | 0e901e68141fd | 142MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | df7b72818ad2e | 125MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>                         | 7801cfc6d5c07 | 34.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format json:
[{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"34400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},{"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271
c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"},{"id":"7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"243000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"]
,"size":"23400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"},{"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220601032413-2342"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"52345adbb6da917c43153da72c292eb48adbc94e25b2684879441e4ab50b4edc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220601032413-2342"],"size":"30
"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format yaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls --format yaml:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: 7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "243000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "34400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 52345adbb6da917c43153da72c292eb48adbc94e25b2684879441e4ab50b4edc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220601032413-2342
size: "30"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)
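The four ImageList subtests above run the same listing through different encoders, so the image set is identical and only the presentation changes; a minimal sketch with <profile> as a placeholder:
    minikube -p <profile> image ls --format short   # one repo:tag per line
    minikube -p <profile> image ls --format table   # table with image IDs and sizes
    minikube -p <profile> image ls --format json    # machine-readable
    minikube -p <profile> image ls --format yaml    # machine-readable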

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601032413-2342 ssh pgrep buildkitd: exit status 1 (429.956664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image build -t localhost/my-image:functional-20220601032413-2342 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image build -t localhost/my-image:functional-20220601032413-2342 testdata/build: (2.200878618s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image build -t localhost/my-image:functional-20220601032413-2342 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in de3996f66398
Removing intermediate container de3996f66398
---> 430f23233d6e
Step 3/3 : ADD content.txt /
---> 7441fc485f6b
Successfully built 7441fc485f6b
Successfully tagged localhost/my-image:functional-20220601032413-2342
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)
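The build test above first probes for buildkitd (the non-zero pgrep exit is expected when it is not running) and then builds the testdata/build context against the Docker daemon inside the node. Judging from the three steps in the output, that context presumably contains a Dockerfile along these lines (a reconstruction, not the verbatim file); <profile> and <tag> are placeholders:
    # hypothetical testdata/build/Dockerfile, inferred from Step 1/3..3/3 above
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

    # build inside the node and confirm the new tag is listed
    minikube -p <profile> image build -t localhost/my-image:<tag> testdata/build
    minikube -p <profile> image ls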

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.87642302s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
2022/06/01 03:26:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342: (3.402291914s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.77s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)
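All three UpdateContextCmd subtests above exercise the same command, which rewrites the kubeconfig entry for the profile so its server address matches the current cluster state; a minimal sketch with <profile> as a placeholder:
    minikube -p <profile> update-context --alsologtostderr -v=2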

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342: (2.392783257s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342: (4.62306623s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image save gcr.io/google-containers/addon-resizer:functional-20220601032413-2342 /Users/jenkins/workspace/addon-resizer-save.tar
E0601 03:26:49.013513    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:26:49.020590    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:26:49.030699    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:26:49.052837    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:26:49.092996    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:26:49.173147    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 03:26:49.333459    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image save gcr.io/google-containers/addon-resizer:functional-20220601032413-2342 /Users/jenkins/workspace/addon-resizer-save.tar: (1.928665703s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image rm gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
E0601 03:26:49.653797    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls
E0601 03:26:50.294510    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load /Users/jenkins/workspace/addon-resizer-save.tar
E0601 03:26:51.574796    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.885295491s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601032413-2342 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
E0601 03:26:54.135055    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601032413-2342 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601032413-2342: (2.523480301s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)
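Taken together, the ImageCommands tests above round-trip an image between the host's Docker daemon, a tarball, and the runtime inside the minikube node; a minimal sketch with <profile>, <tag>, and the tar path as placeholders:
    # tag a local image and push it into the node's runtime
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:<tag>
    minikube -p <profile> image load --daemon gcr.io/google-containers/addon-resizer:<tag>
    # export it to a tar, remove it from the node, then load it back from the file
    minikube -p <profile> image save gcr.io/google-containers/addon-resizer:<tag> /tmp/addon-resizer-save.tar
    minikube -p <profile> image rm gcr.io/google-containers/addon-resizer:<tag>
    minikube -p <profile> image load /tmp/addon-resizer-save.tar
    # or copy it back into the host's Docker daemon, then list what the node holds
    minikube -p <profile> image save --daemon gcr.io/google-containers/addon-resizer:<tag>
    minikube -p <profile> image ls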

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.17s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601032413-2342
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220601032413-2342
--- PASS: TestFunctional/delete_my-image_image (0.07s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220601032413-2342
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

                                                
                                    
x
+
TestJSONOutput/start/Command (40.55s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220601033421-2342 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220601033421-2342 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (40.549575412s)
--- PASS: TestJSONOutput/start/Command (40.55s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220601033421-2342 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220601033421-2342 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (12.45s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220601033421-2342 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220601033421-2342 --output=json --user=testUser: (12.450794223s)
--- PASS: TestJSONOutput/stop/Command (12.45s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
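The TestJSONOutput group above drives a full start/pause/unpause/stop lifecycle with --output=json, where every line of output is a JSON event (step, info, or error); the Distinct/IncreasingCurrentSteps subtests only validate the step counters in that stream. A minimal sketch with <profile> as a placeholder:
    minikube start   -p <profile> --output=json --user=testUser --memory=2200 --wait=true --driver=docker
    minikube pause   -p <profile> --output=json --user=testUser
    minikube unpause -p <profile> --output=json --user=testUser
    minikube stop    -p <profile> --output=json --user=testUser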

                                                
                                    
x
+
TestErrorJSONOutput (0.76s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220601033518-2342 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220601033518-2342 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (325.342925ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"866dc896-23db-41fb-8162-1a12a7648370","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220601033518-2342] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"30c176c4-67ab-419d-81af-a263d4540b7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"c782fa25-7bd6-49ee-a969-3c39b1499efb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig"}}
	{"specversion":"1.0","id":"24431623-39a1-41d0-8f4f-0d42d1629d05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"cb023cc5-c88e-436c-9ae2-fe785a1605a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ca315fff-1239-4b9c-875c-f17a3c7b3013","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube"}}
	{"specversion":"1.0","id":"c01dff24-61c6-4fc4-82f8-3ff3a76d79bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220601033518-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220601033518-2342
--- PASS: TestErrorJSONOutput (0.76s)

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (26.85s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220601033519-2342 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220601033519-2342 --network=: (24.036289447s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220601033519-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220601033519-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220601033519-2342: (2.743144828s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.85s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (26.68s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220601033545-2342 --network=bridge
E0601 03:35:49.584225    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220601033545-2342 --network=bridge: (23.890301544s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220601033545-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220601033545-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220601033545-2342: (2.724098373s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.68s)

                                                
                                    
x
+
TestKicExistingNetwork (28.66s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220601033612-2342 --network=existing-network
E0601 03:36:17.300148    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220601033612-2342 --network=existing-network: (25.513303646s)
helpers_test.go:175: Cleaning up "existing-network-20220601033612-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220601033612-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220601033612-2342: (2.733979345s)
--- PASS: TestKicExistingNetwork (28.66s)

                                                
                                    
x
+
TestKicCustomSubnet (28.14s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220601033641-2342 --subnet=192.168.60.0/24
E0601 03:36:49.036117    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220601033641-2342 --subnet=192.168.60.0/24: (25.340929944s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220601033641-2342 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220601033641-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220601033641-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220601033641-2342: (2.735846786s)
--- PASS: TestKicCustomSubnet (28.14s)
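The KIC network tests above differ only in how the docker network backing the node container is chosen; a minimal sketch with <profile> as a placeholder (in the custom-subnet case the inspected network is named after the profile):
    minikube start -p <profile> --network=                    # let minikube create or pick a network
    minikube start -p <profile> --network=bridge              # use Docker's default bridge
    minikube start -p <profile> --network=existing-network    # attach to a pre-created network
    minikube start -p <profile> --subnet=192.168.60.0/24      # request an explicit subnet
    docker network inspect <profile> --format "{{(index .IPAM.Config 0).Subnet}}"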

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (57.72s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220601033709-2342 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220601033709-2342 --driver=docker : (24.003413444s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220601033709-2342 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220601033709-2342 --driver=docker : (25.917315279s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220601033709-2342
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220601033709-2342
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220601033709-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220601033709-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220601033709-2342: (2.880496615s)
helpers_test.go:175: Cleaning up "first-20220601033709-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220601033709-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220601033709-2342: (2.876210309s)
--- PASS: TestMinikubeProfile (57.72s)
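TestMinikubeProfile starts two clusters, flips the active profile between them, and checks the listing; a minimal sketch with first/second as placeholder profile names:
    minikube start -p first --driver=docker
    minikube start -p second --driver=docker
    minikube profile first            # make 'first' the active profile
    minikube profile list -ojson      # both profiles, with status, as JSON
    minikube delete -p second
    minikube delete -p first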

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220601033807-2342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220601033807-2342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.033728742s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220601033807-2342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220601033807-2342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220601033807-2342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.419766433s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.42s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.45s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220601033807-2342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (2.4s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220601033807-2342 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220601033807-2342 --alsologtostderr -v=5: (2.402262371s)
--- PASS: TestMountStart/serial/DeleteFirst (2.40s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220601033807-2342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220601033807-2342
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220601033807-2342: (1.605426555s)
--- PASS: TestMountStart/serial/Stop (1.61s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (5.08s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220601033807-2342
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220601033807-2342: (4.080653746s)
--- PASS: TestMountStart/serial/RestartStopped (5.08s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220601033807-2342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)
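The MountStart serial above checks that a host mount created at start time survives the deletion of a sibling profile and a stop/start cycle; a minimal sketch with m1/m2 as placeholder profile names:
    minikube start -p m1 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
    minikube start -p m2 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
    minikube -p m1 ssh -- ls /minikube-host    # mounted host directory is visible in the guest
    minikube delete -p m1 --alsologtostderr -v=5
    minikube -p m2 ssh -- ls /minikube-host    # m2's mount is unaffected
    minikube stop -p m2
    minikube start -p m2
    minikube -p m2 ssh -- ls /minikube-host    # still mounted after the restart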

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (71.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601033835-2342 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601033835-2342 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m10.932134676s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.72s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.647652663s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- rollout status deployment/busybox: (3.012121824s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-cx5xl -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-sd76z -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-cx5xl -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-sd76z -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-cx5xl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-sd76z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.07s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-cx5xl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-cx5xl -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-sd76z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601033835-2342 -- exec busybox-7978565885-sd76z -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
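Note: the pipeline above extracts the address that host.minikube.internal resolves to inside each pod: awk 'NR==5' keeps the fifth line of BusyBox's nslookup output (where the resolved address typically appears) and cut -d' ' -f3 takes its third space-separated field, which the next command then pings. Run by hand against a running busybox pod it would look roughly like:

$ kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
$ kubectl exec <busybox-pod> -- sh -c "ping -c 1 <resolved-address>"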

                                                
                                    
TestMultiNode/serial/AddNode (27s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220601033835-2342 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220601033835-2342 -v 3 --alsologtostderr: (25.881985965s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr: (1.114262663s)
--- PASS: TestMultiNode/serial/AddNode (27.00s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.52s)

                                                
                                    
TestMultiNode/serial/CopyFile (16.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --output json --alsologtostderr: (1.184222813s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp testdata/cp-test.txt multinode-20220601033835-2342:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3389582863/001/cp-test_multinode-20220601033835-2342.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342:/home/docker/cp-test.txt multinode-20220601033835-2342-m02:/home/docker/cp-test_multinode-20220601033835-2342_multinode-20220601033835-2342-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m02 "sudo cat /home/docker/cp-test_multinode-20220601033835-2342_multinode-20220601033835-2342-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342:/home/docker/cp-test.txt multinode-20220601033835-2342-m03:/home/docker/cp-test_multinode-20220601033835-2342_multinode-20220601033835-2342-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m03 "sudo cat /home/docker/cp-test_multinode-20220601033835-2342_multinode-20220601033835-2342-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp testdata/cp-test.txt multinode-20220601033835-2342-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3389582863/001/cp-test_multinode-20220601033835-2342-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342-m02:/home/docker/cp-test.txt multinode-20220601033835-2342:/home/docker/cp-test_multinode-20220601033835-2342-m02_multinode-20220601033835-2342.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342 "sudo cat /home/docker/cp-test_multinode-20220601033835-2342-m02_multinode-20220601033835-2342.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342-m02:/home/docker/cp-test.txt multinode-20220601033835-2342-m03:/home/docker/cp-test_multinode-20220601033835-2342-m02_multinode-20220601033835-2342-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m03 "sudo cat /home/docker/cp-test_multinode-20220601033835-2342-m02_multinode-20220601033835-2342-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp testdata/cp-test.txt multinode-20220601033835-2342-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile3389582863/001/cp-test_multinode-20220601033835-2342-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342-m03:/home/docker/cp-test.txt multinode-20220601033835-2342:/home/docker/cp-test_multinode-20220601033835-2342-m03_multinode-20220601033835-2342.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342 "sudo cat /home/docker/cp-test_multinode-20220601033835-2342-m03_multinode-20220601033835-2342.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 cp multinode-20220601033835-2342-m03:/home/docker/cp-test.txt multinode-20220601033835-2342-m02:/home/docker/cp-test_multinode-20220601033835-2342-m03_multinode-20220601033835-2342-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 ssh -n multinode-20220601033835-2342-m02 "sudo cat /home/docker/cp-test_multinode-20220601033835-2342-m03_multinode-20220601033835-2342-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.88s)
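Note: the copy matrix above exercises the three forms of minikube cp that appear in the helper calls: host file to a node, node file back to the host, and node to node, with nodes addressed as <node-name>:<path>. Condensed, with placeholder paths:

$ out/minikube-darwin-amd64 -p <profile> cp testdata/cp-test.txt <profile>:/home/docker/cp-test.txt
$ out/minikube-darwin-amd64 -p <profile> cp <profile>:/home/docker/cp-test.txt ./cp-test.txt
$ out/minikube-darwin-amd64 -p <profile> cp <profile>-m02:/home/docker/cp-test.txt <profile>-m03:/home/docker/cp-test.txt

Each copy is then verified with ssh -n <node> "sudo cat <path>", as in the log lines above.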

                                                
                                    
TestMultiNode/serial/StopNode (14.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 node stop m03
E0601 03:40:49.602537    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 node stop m03: (12.477941103s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status: exit status 7 (841.992243ms)

                                                
                                                
-- stdout --
	multinode-20220601033835-2342
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601033835-2342-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601033835-2342-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr: exit status 7 (877.016741ms)

                                                
                                                
-- stdout --
	multinode-20220601033835-2342
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601033835-2342-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601033835-2342-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:40:51.620765    6907 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:40:51.620977    6907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:40:51.620982    6907 out.go:309] Setting ErrFile to fd 2...
	I0601 03:40:51.620987    6907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:40:51.621091    6907 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:40:51.621250    6907 out.go:303] Setting JSON to false
	I0601 03:40:51.621265    6907 mustload.go:65] Loading cluster: multinode-20220601033835-2342
	I0601 03:40:51.621550    6907 config.go:178] Loaded profile config "multinode-20220601033835-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 03:40:51.621562    6907 status.go:253] checking status of multinode-20220601033835-2342 ...
	I0601 03:40:51.621919    6907 cli_runner.go:164] Run: docker container inspect multinode-20220601033835-2342 --format={{.State.Status}}
	I0601 03:40:51.721873    6907 status.go:328] multinode-20220601033835-2342 host status = "Running" (err=<nil>)
	I0601 03:40:51.721904    6907 host.go:66] Checking if "multinode-20220601033835-2342" exists ...
	I0601 03:40:51.722181    6907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601033835-2342
	I0601 03:40:51.795081    6907 host.go:66] Checking if "multinode-20220601033835-2342" exists ...
	I0601 03:40:51.795344    6907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 03:40:51.795397    6907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601033835-2342
	I0601 03:40:51.867361    6907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55823 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601033835-2342/id_rsa Username:docker}
	I0601 03:40:51.952935    6907 ssh_runner.go:195] Run: systemctl --version
	I0601 03:40:51.957301    6907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 03:40:51.966245    6907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220601033835-2342
	I0601 03:40:52.037916    6907 kubeconfig.go:92] found "multinode-20220601033835-2342" server: "https://127.0.0.1:55822"
	I0601 03:40:52.037940    6907 api_server.go:165] Checking apiserver status ...
	I0601 03:40:52.037980    6907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 03:40:52.047519    6907 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1568/cgroup
	W0601 03:40:52.055293    6907 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1568/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0601 03:40:52.055329    6907 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55822/healthz ...
	I0601 03:40:52.060817    6907 api_server.go:266] https://127.0.0.1:55822/healthz returned 200:
	ok
	I0601 03:40:52.060828    6907 status.go:419] multinode-20220601033835-2342 apiserver status = Running (err=<nil>)
	I0601 03:40:52.060836    6907 status.go:255] multinode-20220601033835-2342 status: &{Name:multinode-20220601033835-2342 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 03:40:52.060848    6907 status.go:253] checking status of multinode-20220601033835-2342-m02 ...
	I0601 03:40:52.061073    6907 cli_runner.go:164] Run: docker container inspect multinode-20220601033835-2342-m02 --format={{.State.Status}}
	I0601 03:40:52.133548    6907 status.go:328] multinode-20220601033835-2342-m02 host status = "Running" (err=<nil>)
	I0601 03:40:52.133568    6907 host.go:66] Checking if "multinode-20220601033835-2342-m02" exists ...
	I0601 03:40:52.133817    6907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601033835-2342-m02
	I0601 03:40:52.207246    6907 host.go:66] Checking if "multinode-20220601033835-2342-m02" exists ...
	I0601 03:40:52.207499    6907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 03:40:52.207544    6907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601033835-2342-m02
	I0601 03:40:52.279520    6907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56012 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601033835-2342-m02/id_rsa Username:docker}
	I0601 03:40:52.363264    6907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 03:40:52.372912    6907 status.go:255] multinode-20220601033835-2342-m02 status: &{Name:multinode-20220601033835-2342-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0601 03:40:52.372942    6907 status.go:253] checking status of multinode-20220601033835-2342-m03 ...
	I0601 03:40:52.373191    6907 cli_runner.go:164] Run: docker container inspect multinode-20220601033835-2342-m03 --format={{.State.Status}}
	I0601 03:40:52.445633    6907 status.go:328] multinode-20220601033835-2342-m03 host status = "Stopped" (err=<nil>)
	I0601 03:40:52.445654    6907 status.go:341] host is not running, skipping remaining checks
	I0601 03:40:52.445661    6907 status.go:255] multinode-20220601033835-2342-m03 status: &{Name:multinode-20220601033835-2342-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.20s)
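Note: the two status calls above are recorded as Non-zero exit even though the test passes; minikube status exits non-zero (7 in this run) when a node's host is stopped, and the test expects exactly that after stopping m03. To see the code by hand:

$ out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status; echo "exit: $?"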

                                                
                                    
TestMultiNode/serial/StartAfterStop (25.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 node start m03 --alsologtostderr: (23.997862395s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status: (1.181832276s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (119.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220601033835-2342
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220601033835-2342
E0601 03:41:49.040569    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220601033835-2342: (37.160089937s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601033835-2342 --wait=true -v=8 --alsologtostderr
E0601 03:43:12.090725    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601033835-2342 --wait=true -v=8 --alsologtostderr: (1m21.915939408s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220601033835-2342
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.18s)

                                                
                                    
TestMultiNode/serial/DeleteNode (18.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 node delete m03: (16.617087825s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.455819288s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.98s)
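Note: the go-template query at the end only inspects each remaining node's Ready condition. Without the extra quoting added by the test harness it is roughly:

$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

which prints one True/False per node.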

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 stop: (24.962176395s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status: exit status 7 (182.945705ms)

                                                
                                                
-- stdout --
	multinode-20220601033835-2342
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601033835-2342-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr: exit status 7 (179.994612ms)

                                                
                                                
-- stdout --
	multinode-20220601033835-2342
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601033835-2342-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 03:44:01.109943    7379 out.go:296] Setting OutFile to fd 1 ...
	I0601 03:44:01.110133    7379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:44:01.110138    7379 out.go:309] Setting ErrFile to fd 2...
	I0601 03:44:01.110141    7379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 03:44:01.110234    7379 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 03:44:01.110389    7379 out.go:303] Setting JSON to false
	I0601 03:44:01.110404    7379 mustload.go:65] Loading cluster: multinode-20220601033835-2342
	I0601 03:44:01.110685    7379 config.go:178] Loaded profile config "multinode-20220601033835-2342": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 03:44:01.110698    7379 status.go:253] checking status of multinode-20220601033835-2342 ...
	I0601 03:44:01.111055    7379 cli_runner.go:164] Run: docker container inspect multinode-20220601033835-2342 --format={{.State.Status}}
	I0601 03:44:01.174135    7379 status.go:328] multinode-20220601033835-2342 host status = "Stopped" (err=<nil>)
	I0601 03:44:01.174159    7379 status.go:341] host is not running, skipping remaining checks
	I0601 03:44:01.174166    7379 status.go:255] multinode-20220601033835-2342 status: &{Name:multinode-20220601033835-2342 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 03:44:01.174192    7379 status.go:253] checking status of multinode-20220601033835-2342-m02 ...
	I0601 03:44:01.174472    7379 cli_runner.go:164] Run: docker container inspect multinode-20220601033835-2342-m02 --format={{.State.Status}}
	I0601 03:44:01.239000    7379 status.go:328] multinode-20220601033835-2342-m02 host status = "Stopped" (err=<nil>)
	I0601 03:44:01.239035    7379 status.go:341] host is not running, skipping remaining checks
	I0601 03:44:01.239046    7379 status.go:255] multinode-20220601033835-2342-m02 status: &{Name:multinode-20220601033835-2342-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.33s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (77.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601033835-2342 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601033835-2342 --wait=true -v=8 --alsologtostderr --driver=docker : (1m15.087054996s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601033835-2342 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.45670153s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.49s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (29.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220601033835-2342
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601033835-2342-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220601033835-2342-m02 --driver=docker : exit status 14 (388.543749ms)

                                                
                                                
-- stdout --
	* [multinode-20220601033835-2342-m02] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220601033835-2342-m02' is duplicated with machine name 'multinode-20220601033835-2342-m02' in profile 'multinode-20220601033835-2342'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601033835-2342-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601033835-2342-m03 --driver=docker : (25.496289069s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220601033835-2342
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220601033835-2342: exit status 80 (520.821246ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220601033835-2342
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220601033835-2342-m03 already exists in multinode-20220601033835-2342-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220601033835-2342-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220601033835-2342-m03: (2.920482197s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.38s)

                                                
                                    
TestScheduledStopUnix (98.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220601035017-2342 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220601035017-2342 --memory=2048 --driver=docker : (24.282665981s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601035017-2342 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220601035017-2342 -n scheduled-stop-20220601035017-2342
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220601035017-2342 -n scheduled-stop-20220601035017-2342: exit status 85 (174.802579ms)

                                                
                                                
-- stdout --
	* Profile "scheduled-stop-20220601035017-2342" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p scheduled-stop-20220601035017-2342"

                                                
                                                
-- /stdout --
scheduled_stop_test.go:191: status error: exit status 85 (may be ok)
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601035017-2342 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601035017-2342 --cancel-scheduled
E0601 03:50:49.611608    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220601035017-2342 -n scheduled-stop-20220601035017-2342
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220601035017-2342
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601035017-2342 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0601 03:51:49.045339    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220601035017-2342
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220601035017-2342: exit status 7 (117.066369ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220601035017-2342
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220601035017-2342 -n scheduled-stop-20220601035017-2342
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220601035017-2342 -n scheduled-stop-20220601035017-2342: exit status 7 (113.777777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220601035017-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220601035017-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220601035017-2342: (2.432969508s)
--- PASS: TestScheduledStopUnix (98.46s)
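Note: the scheduled-stop flow above amounts to three commands: arming a delayed stop, inspecting the remaining time, and cancelling. Condensed, with a placeholder profile:

$ out/minikube-darwin-amd64 stop -p <profile> --schedule 5m
$ out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p <profile>
$ out/minikube-darwin-amd64 stop -p <profile> --cancel-scheduled

The later exit status 7 checks confirm the host actually stopped once a short 15s schedule was allowed to fire.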

                                                
                                    
TestSkaffold (57.4s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe305145086 version
skaffold_test.go:63: skaffold version: v1.38.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220601035156-2342 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220601035156-2342 --memory=2600 --driver=docker : (23.61924075s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe305145086 run --minikube-profile skaffold-20220601035156-2342 --kube-context skaffold-20220601035156-2342 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe305145086 run --minikube-profile skaffold-20220601035156-2342 --kube-context skaffold-20220601035156-2342 --status-check=true --port-forward=false --interactive=false: (19.384864321s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-58d8776c5c-ngkz5" [a0cd6ca3-21d2-4f3a-ba6c-936dc3c16e94] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012805246s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-64978d7c97-7kjr6" [01fccb58-2d6a-4921-a6be-c4294a8ed228] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008819s
helpers_test.go:175: Cleaning up "skaffold-20220601035156-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220601035156-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220601035156-2342: (3.006375981s)
--- PASS: TestSkaffold (57.40s)
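Note: the skaffold invocation pins skaffold to the test cluster by profile and kube-context and disables interactive features; stripped of the temporary binary path it is effectively:

$ skaffold run --minikube-profile skaffold-20220601035156-2342 --kube-context skaffold-20220601035156-2342 --status-check=true --port-forward=false --interactive=false

The leeroy-app/leeroy-web pods it waits on presumably come from the example project the test points skaffold at.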

                                                
                                    
TestInsufficientStorage (13.25s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220601035253-2342 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220601035253-2342 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.823365892s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70d0def3-ea20-4b7d-84cd-ae05c66969bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220601035253-2342] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a060b2b-9ad1-43f8-942d-5d2692663b1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"da034700-132e-4b8c-8b03-d6436af7bd1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig"}}
	{"specversion":"1.0","id":"a768aeaf-fe87-4f93-81d9-d1341b661fbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"69a1b695-73b7-4d07-8caa-670d0c60edb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e09c8138-22fe-462d-ae68-348d199e485a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube"}}
	{"specversion":"1.0","id":"4d16a7f5-ee9a-4c4d-a85f-75bacf3b9aa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"03a5bf2a-9e4e-4ec4-96f3-75e3e19a9653","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cab448a8-4d20-4e6c-a8cb-c9b136369510","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dcb0a30e-b437-457b-9411-8d47c92a58af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"3993f63c-fe26-4264-a6bf-fc006b4669aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220601035253-2342 in cluster insufficient-storage-20220601035253-2342","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4987449-e8ae-4f0a-b39e-7dc7acd25b5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f2c2213-2c9b-47cf-a6f7-0c5efefbbd1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd6af9c3-93b3-4af8-89a2-1bdac0187726","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220601035253-2342 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220601035253-2342 --output=json --layout=cluster: exit status 7 (436.119456ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220601035253-2342","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601035253-2342","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:53:03.896783    8473 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601035253-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220601035253-2342 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220601035253-2342 --output=json --layout=cluster: exit status 7 (431.053128ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220601035253-2342","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601035253-2342","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 03:53:04.329299    8483 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601035253-2342" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	E0601 03:53:04.337666    8483 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/insufficient-storage-20220601035253-2342/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220601035253-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220601035253-2342
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220601035253-2342: (2.549467012s)
--- PASS: TestInsufficientStorage (13.25s)
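Note: with --output=json each progress step and error is emitted as one CloudEvents-style JSON object per line (the shortage itself appears to be simulated through the MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE values echoed above), so the failure can be inspected mechanically. A small sketch, assuming jq is available:

$ out/minikube-darwin-amd64 start -p <profile> --memory=2048 --output=json --wait=true --driver=docker | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'

status --output=json --layout=cluster returns the same structured view, including the 507/InsufficientStorage codes seen above.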

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (5.6s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14079
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1114321170/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1114321170/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1114321170/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1114321170/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (5.60s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.59s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14079
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1391749534/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1391749534/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1391749534/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1391749534/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220601035914-2342
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220601035914-2342: (3.805801248s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.81s)

                                                
                                    
TestPause/serial/Start (76.49s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220601040007-2342 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0601 04:00:24.467894    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 04:00:49.612208    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220601040007-2342 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m16.493199424s)
--- PASS: TestPause/serial/Start (76.49s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220601040007-2342 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220601040007-2342 --alsologtostderr -v=1 --driver=docker : (6.464500571s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.48s)

                                                
                                    
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220601040007-2342 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (345.405423ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220601040237-2342] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)
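
Note: the exit status 14 above is minikube's MK_USAGE error; --no-kubernetes and --kubernetes-version are mutually exclusive. A rough reproduction, following the hint printed in stderr (the profile name is a placeholder):

$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=docker   # rejected with MK_USAGE (exit 14)
$ minikube config unset kubernetes-version   # clear a globally pinned version, as the error message suggests
$ minikube start -p demo --no-kubernetes --driver=docker   # starts a profile without Kubernetes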

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (26.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --driver=docker 
E0601 04:02:40.617653    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --driver=docker : (25.61484225s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220601040237-2342 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (16.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --no-kubernetes --driver=docker 
E0601 04:03:08.310429    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --no-kubernetes --driver=docker : (13.850060186s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220601040237-2342 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220601040237-2342 status -o json: exit status 2 (445.808941ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220601040237-2342","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220601040237-2342
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220601040237-2342: (2.679067619s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.98s)
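
Note: with Kubernetes stopped inside the profile, status -o json still prints the state (Host Running, Kubelet/APIServer Stopped) but exits non-zero (2 in this run). A sketch of pulling one field out of that JSON, assuming jq is installed (profile name is a placeholder):

$ minikube -p demo status -o json | jq -r '.Kubelet'   # prints "Stopped" here; the non-zero exit encodes component state, not a command failure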

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --no-kubernetes --driver=docker : (6.478013084s)
--- PASS: TestNoKubernetes/serial/Start (6.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601040237-2342 "sudo systemctl is-active --quiet service kubelet"

                                                
                                                
=== CONT  TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601040237-2342 "sudo systemctl is-active --quiet service kubelet": exit status 1 (655.690757ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.66s)
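
Note: the kubelet check runs systemctl is-active inside the node over minikube ssh; a non-zero status means the unit is not active, which is exactly what a --no-kubernetes profile should report. Manual equivalent (profile name is a placeholder):

$ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
$ echo $?   # non-zero when kubelet is not running (systemctl reported 3 above, surfaced by minikube ssh as exit 1)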

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)
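
Note: profile list is exercised in both table and JSON form; the JSON output is the one to use from scripts. Sketch, assuming jq is available:

$ minikube profile list --output=json | jq .   # pretty-print the machine-readable profile listing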

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220601040237-2342

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220601040237-2342: (1.740436369s)
--- PASS: TestNoKubernetes/serial/Stop (1.74s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (4.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601040237-2342 --driver=docker : (4.424676043s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601040237-2342 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601040237-2342 "sudo systemctl is-active --quiet service kubelet": exit status 1 (501.638861ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (51.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E0601 04:03:52.678347    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (51.24711732s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (47.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220601035307-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220601035307-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (47.593752947s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.59s)
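
Note: the network-plugin groups in the rest of this report differ only in how the CNI is chosen at start time; the netcat, DNS, localhost and hairpin probes that follow are identical for every group. The start variants exercised below, with placeholder profile names:

$ minikube start -p auto-demo --driver=docker                                   # no flag: minikube picks its default
$ minikube start -p kindnet-demo --cni=kindnet --driver=docker
$ minikube start -p cilium-demo --cni=cilium --driver=docker
$ minikube start -p calico-demo --cni=calico --driver=docker
$ minikube start -p bridge-demo --cni=bridge --driver=docker
$ minikube start -p false-demo --cni=false --driver=docker                      # no CNI at all
$ minikube start -p defaultcni-demo --enable-default-cni=true --driver=docker   # older spelling of the built-in bridge CNI
$ minikube start -p kubenet-demo --network-plugin=kubenet --driver=docker       # kubelet's kubenet plugin instead of a CNI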

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220601035306-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml: (1.744448104s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-4w95m" [e0d86e3e-d34e-4682-9065-98fb65daee6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-4w95m" [e0d86e3e-d34e-4682-9065-98fb65daee6c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.010586147s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.80s)
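
Note: each NetCatPod step force-replaces a small netcat deployment from the repo's testdata and waits (up to 15 minutes) for its pod to become Ready. An equivalent manual wait, with the kubectl context as a placeholder:

$ kubectl --context <context> replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context <context> wait --for=condition=Ready pod -l app=netcat --timeout=15m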

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220601035306-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.111156383s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.11s)
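
Note: the HairPin probe dials the pod's own Service name from inside the pod, so it only succeeds when the node NATs pod -> service -> same-pod ("hairpin") traffic. Here the connection times out (exit 1) yet the test still passes, i.e. a hairpin failure is tolerated for this plugin configuration. Sketch, with the context as a placeholder:

$ kubectl --context <context> exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
$ echo $?   # 0 only when hairpin traffic works; non-zero is accepted where it is not expected to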

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Start (79.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220601035308-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220601035308-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m19.7591062s)
--- PASS: TestNetworkPlugins/group/cilium/Start (79.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-p6hg9" [ede471f9-e044-4b9e-a5b1-4d99b8c82020] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01460138s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
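
Note: ControllerPod steps simply wait for the CNI's own pod (here kindnet's, matched by the app=kindnet label) to be Running in kube-system before the connectivity probes start. Manual check, context as a placeholder:

$ kubectl --context <context> -n kube-system get pods -l app=kindnet -o wide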

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220601035307-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220601035307-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220601035307-2342 replace --force -f testdata/netcat-deployment.yaml: (1.73121048s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-rf68l" [6d8dc066-0d01-426b-9380-b6ac2bf24208] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-rf68l" [6d8dc066-0d01-426b-9380-b6ac2bf24208] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007815022s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220601035307-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220601035307-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220601035307-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (70.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220601035308-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0601 04:05:49.615374    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220601035308-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m10.240137798s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-g6qrt" [c68f6b6d-8e86-4fb5-926f-98e0b026dc18] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016362694s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220601035308-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/NetCatPod (12.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220601035308-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220601035308-2342 replace --force -f testdata/netcat-deployment.yaml: (2.388309625s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-mpfm7" [d5909367-d71d-4476-8eac-4e4d480a9da9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-mpfm7" [d5909367-d71d-4476-8eac-4e4d480a9da9] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.007244998s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220601035308-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220601035308-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220601035308-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (51.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220601035307-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220601035307-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (51.652024058s)
--- PASS: TestNetworkPlugins/group/false/Start (51.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-pgg9x" [042fa820-8d01-4509-aaa7-af8f1694a780] Running
E0601 04:06:49.074147    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019898256s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220601035308-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220601035308-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context calico-20220601035308-2342 replace --force -f testdata/netcat-deployment.yaml: (1.792841617s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-nwrfx" [7da60999-0353-4e1a-92fb-94f74315ec69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-nwrfx" [7da60999-0353-4e1a-92fb-94f74315ec69] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.009074639s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220601035308-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220601035308-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220601035308-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (41.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (41.22970413s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220601035307-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (11.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220601035307-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220601035307-2342 replace --force -f testdata/netcat-deployment.yaml: (1.598766706s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-6fffr" [b2d6e62e-b38d-4e11-b98d-2f91a0a87a64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-6fffr" [b2d6e62e-b38d-4e11-b98d-2f91a0a87a64] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.006780711s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220601035307-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220601035307-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220601035307-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0601 04:07:40.644444    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220601035307-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.106922952s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (41.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (41.544138573s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220601035306-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context bridge-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml: (1.942592076s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-sb2pb" [0b815103-f21c-445c-b759-7efe387fc823] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-sb2pb" [0b815103-f21c-445c-b759-7efe387fc823] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008892542s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220601035306-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (52.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220601035306-2342 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (52.157782399s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220601035306-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml: (2.240524892s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-5ckhv" [cd05fb93-4411-49eb-a2e0-506f8fb6b420] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-5ckhv" [cd05fb93-4411-49eb-a2e0-506f8fb6b420] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.01082132s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601035306-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220601035306-2342 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (12.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220601035306-2342 replace --force -f testdata/netcat-deployment.yaml: (1.64198894s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-mnv5x" [abbe58b7-9c44-4d28-9317-4fdec3d03c26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-mnv5x" [abbe58b7-9c44-4d28-9317-4fdec3d03c26] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.00917299s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601035306-2342 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220601035306-2342 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220601040915-2342 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6
E0601 04:09:31.620917    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:31.627300    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:31.637866    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:31.660028    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:31.700134    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:31.824733    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:31.985670    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:32.307826    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:32.950251    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:34.230441    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:36.791620    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:41.911987    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:09:52.152312    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220601040915-2342 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (40.578672414s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.58s)
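
Note: --embed-certs makes minikube inline the client certificate and key into kubeconfig as base64 data instead of referencing files under the profile directory. A quick sanity check, assuming the new profile is the current kubectl context:

$ kubectl config view --raw --minify | grep -Ec 'client-(certificate|key)-data'   # expect 2 when the certs are embedded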

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context embed-certs-20220601040915-2342 create -f testdata/busybox.yaml: (1.709745994s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [1386d21c-a661-4abd-b72c-d618d675ebfd] Pending
helpers_test.go:342: "busybox" [1386d21c-a661-4abd-b72c-d618d675ebfd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [1386d21c-a661-4abd-b72c-d618d675ebfd] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.014836542s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.83s)
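
Note: DeployApp creates a busybox pod from the repo's testdata, waits for it to run, then reads the container's open-file limit. Rough manual equivalent, context as a placeholder:

$ kubectl --context <context> create -f testdata/busybox.yaml
$ kubectl --context <context> wait --for=condition=Ready pod/busybox --timeout=8m
$ kubectl --context <context> exec busybox -- /bin/sh -c "ulimit -n"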

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220601040915-2342 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)
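
Note: addon images and registries can be overridden when an addon is enabled; the step above points metrics-server at an echoserver image behind a fake registry and then uses kubectl describe to confirm the overrides reached the Deployment, presumably to verify flag plumbing rather than to run a working metrics-server. Sketch with a placeholder profile:

$ minikube addons enable metrics-server -p demo --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
$ kubectl --context demo -n kube-system describe deploy/metrics-server | grep -Ei 'image|fake\.domain'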

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220601040915-2342 --alsologtostderr -v=3
E0601 04:10:12.634810    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:10:14.653137    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:14.658219    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:14.669113    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:14.689375    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:14.729550    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:14.811139    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:14.973324    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:15.294484    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:15.936947    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:17.217888    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:19.778123    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220601040915-2342 --alsologtostderr -v=3: (12.570494695s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342: exit status 7 (116.490491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220601040915-2342 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
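
Note: minikube status reports state through its exit code as well as its output; --format takes a Go template that extracts a single field, and the non-zero exit here (7) reflects the stopped cluster rather than a command failure, which is why the test logs "may be ok". Sketch, profile as a placeholder:

$ minikube status --format='{{.Host}}' -p demo
$ echo $?   # prints Stopped and exits non-zero (7 in this run) while the cluster is down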

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (332.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220601040915-2342 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6
E0601 04:10:24.898463    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:35.138911    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:10:49.640875    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 04:10:53.596174    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:10:55.620915    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:11:11.911917    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:11.917054    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:11.929242    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:11.949742    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:11.989957    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:12.071826    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:12.231981    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:12.552385    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:13.193400    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:14.475680    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:17.035989    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:22.158347    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:32.398933    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:36.583661    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:11:46.987981    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:46.994397    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:47.005013    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:47.027279    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:47.069519    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:47.150261    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:47.310635    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:47.631171    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:48.271384    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:49.079066    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 04:11:49.552671    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:52.113475    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:52.881495    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:11:57.235563    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:12:07.475973    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:12:15.519085    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:27.532915    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:27.538130    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:27.550347    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:27.570551    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:27.610760    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:27.691053    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:27.851593    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:27.957386    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:12:28.172024    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:28.813278    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:30.093513    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:32.654140    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:33.842892    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:12:37.774689    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:40.648405    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
E0601 04:12:48.014972    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:12:51.668632    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:51.674050    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:51.684173    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:51.771822    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:51.813267    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:51.894607    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:52.055211    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:52.375309    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:53.017635    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:12:54.299957    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220601040915-2342 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (5m32.315799927s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601040915-2342 -n embed-certs-20220601040915-2342
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (332.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220601040844-2342 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220601040844-2342 --alsologtostderr -v=3: (1.640746236s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601040844-2342 -n old-k8s-version-20220601040844-2342: exit status 7 (117.348029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220601040844-2342 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-7fjk8" [60ef3c6e-81c0-49c9-b5fb-f366fbe635ba] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-7fjk8" [60ef3c6e-81c0-49c9-b5fb-f366fbe635ba] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.016345414s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-7fjk8" [60ef3c6e-81c0-49c9-b5fb-f366fbe635ba] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010278114s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220601040915-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0601 04:16:11.916628    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
start_stop_delete_test.go:293: (dbg) Done: kubectl --context embed-certs-20220601040915-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.646498926s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220601040915-2342 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220601041659-2342 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6
E0601 04:17:14.683245    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:17:27.535664    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
E0601 04:17:40.651464    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601035156-2342/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220601041659-2342 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (51.493897074s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 create -f testdata/busybox.yaml
E0601 04:17:51.670675    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) Done: kubectl --context no-preload-20220601041659-2342 create -f testdata/busybox.yaml: (1.630974967s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [63140e4e-44c4-477c-9204-9b19d1ef8b99] Pending
helpers_test.go:342: "busybox" [63140e4e-44c4-477c-9204-9b19d1ef8b99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0601 04:17:55.224068    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory
helpers_test.go:342: "busybox" [63140e4e-44c4-477c-9204-9b19d1ef8b99] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.015101319s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220601041659-2342 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220601041659-2342 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220601041659-2342 --alsologtostderr -v=3: (12.561239209s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342: exit status 7 (117.146984ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220601041659-2342 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (330.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220601041659-2342 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6
E0601 04:18:19.435215    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601035306-2342/client.crt: no such file or directory
E0601 04:18:30.398995    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:18:58.097249    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory
E0601 04:19:00.585603    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:19:28.276044    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
E0601 04:19:31.625750    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601035306-2342/client.crt: no such file or directory
E0601 04:20:14.661661    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601035307-2342/client.crt: no such file or directory
E0601 04:20:32.713155    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 04:20:49.647379    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601032413-2342/client.crt: no such file or directory
E0601 04:21:11.918195    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601035308-2342/client.crt: no such file or directory
E0601 04:21:47.027947    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
E0601 04:21:49.117365    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory
E0601 04:22:27.577228    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601035307-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220601041659-2342 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (5m30.051030847s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601041659-2342 -n no-preload-20220601041659-2342
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (330.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-jxm74" [1978d3c4-656a-4b2d-87a0-a796070dbce3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-jxm74" [1978d3c4-656a-4b2d-87a0-a796070dbce3] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.01455772s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-jxm74" [1978d3c4-656a-4b2d-87a0-a796070dbce3] Running
E0601 04:24:00.627538    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601035306-2342/client.crt: no such file or directory
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00656904s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220601041659-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context no-preload-20220601041659-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.6400994s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220601041659-2342 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601042455-2342 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601042455-2342 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (41.570179139s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.57s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context default-k8s-different-port-20220601042455-2342 create -f testdata/busybox.yaml: (1.705725661s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [eb0a49fe-0a36-48a2-9d50-36221dcdfbbb] Pending
helpers_test.go:342: "busybox" [eb0a49fe-0a36-48a2-9d50-36221dcdfbbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [eb0a49fe-0a36-48a2-9d50-36221dcdfbbb] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.015431651s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.83s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220601042455-2342 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (12.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220601042455-2342 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220601042455-2342 --alsologtostderr -v=3: (12.611470984s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.61s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342: exit status 7 (120.305686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220601042455-2342 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (334.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601042455-2342 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601042455-2342 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (5m33.786866661s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601042455-2342 -n default-k8s-different-port-20220601042455-2342
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (334.38s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (13.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-vsgbf" [5e30b028-d8e4-4995-a03e-f3039f2e629a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-vsgbf" [5e30b028-d8e4-4995-a03e-f3039f2e629a] Running
E0601 04:31:47.039804    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601035308-2342/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.014332975s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (13.02s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-vsgbf" [5e30b028-d8e4-4995-a03e-f3039f2e629a] Running
E0601 04:31:49.130783    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601032001-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006013411s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220601042455-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context default-k8s-different-port-20220601042455-2342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.595733028s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.60s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220601042455-2342 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220601043243-2342 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220601043243-2342 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (38.80909115s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220601043243-2342 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220601043243-2342 --alsologtostderr -v=3
E0601 04:33:30.448657    2342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-1198-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601035306-2342/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220601043243-2342 --alsologtostderr -v=3: (12.598115665s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342: exit status 7 (119.212104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220601043243-2342 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220601043243-2342 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220601043243-2342 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (18.195392239s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601043243-2342 -n newest-cni-20220601043243-2342
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220601043243-2342 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.55s)

                                                
                                    

Test skip (18/288)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestAddons/parallel/Registry (14.75s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 14.017239ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-n4mzh" [a7297050-cf96-484a-8af3-a3b915056e58] Running
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011664245s
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-trsgh" [af544bbf-9730-4b1a-b391-97932f831143] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01049717s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220601032001-2342 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220601032001-2342 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220601032001-2342 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.666403635s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.75s)
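
Before it is skipped, the Registry test waits for pods matching label selectors in kube-system (actual-registry=true, then registry-proxy=true) and probes the in-cluster registry service with wget, as the log above shows. Below is a hedged client-go sketch of that label-based wait; the kubeconfig path, poll interval, and timeout are assumptions for illustration, not what addons_test.go actually uses.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); assumed, not the test's helper.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll for a Running pod labelled actual-registry=true in kube-system,
	// mirroring the wait recorded in the log above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "actual-registry=true"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				fmt.Println("registry pod running:", p.Name)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for actual-registry=true pod")
}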

                                                
                                    
TestAddons/parallel/Ingress (12.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220601032001-2342 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220601032001-2342 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220601032001-2342 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [c820626b-9c2e-4f18-86a0-e15aa649ffac] Pending
helpers_test.go:342: "nginx" [c820626b-9c2e-4f18-86a0-e15aa649ffac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [c820626b-9c2e-4f18-86a0-e15aa649ffac] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010441102s
addons_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601032001-2342 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.23s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (11.19s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220601032413-2342 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220601032413-2342 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-dttsk" [a238a486-1d7f-40bd-b543-b67f5ccb2fe8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-dttsk" [a238a486-1d7f-40bd-b543-b67f5ccb2fe8] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.011839064s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.19s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.71s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220601035306-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220601035306-2342
--- SKIP: TestNetworkPlugins/group/flannel (0.71s)

TestNetworkPlugins/group/custom-flannel (0.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220601035307-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220601035307-2342
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.57s)

TestStartStop/group/disable-driver-mounts (0.57s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220601040914-2342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220601040914-2342
--- SKIP: TestStartStop/group/disable-driver-mounts (0.57s)